Deploying OpCenter in a Dedicated VPC in AWS
Every AWS account has a default VPC that is pre-configured with an internet gateway and public subnets with associated route tables. Although more complicated to configure, a dedicated VPC allows you to customize the underlying network, for example, to comply with your organization's security policy or to connect with your on-premises network.
Note
AWS provides features to customize VPCs in many ways. The examples shown in this guide reflect one set of configuration options. Your organization may mandate a different set of configuration options.
To deploy OpCenter in a public subnet in a dedicated VPC (as shown in the figure), complete the following steps.
Decide on the IP Address Structure for the VPC
The VPC requires an IPv4 CIDR block, but you have the option of using IPv4 and IPv6 simultaneously. The default is to use IPv4 addresses only.
Choose the source of your IPv4 CIDR block for the VPC. The options are the following.
- Manually enter a CIDR block from the RFC1918 private IPv4 address space
- Manually enter a CIDR block from your own IPv4 address space (Bring-Your-Own-IP)
- Automatically allocate an IPv4 CIDR block using Amazon VPC IP Address Manager (IPAM)
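For illustration, a minimal RFC1918 plan might reserve a /16 for the VPC and carve /20 subnets out of it (the addresses below are examples, not a recommendation):

VPC CIDR:         10.0.0.0/16
Public subnet:    10.0.0.0/20    (OpCenter)
Private subnets:  10.0.16.0/20, 10.0.32.0/20    (worker nodes)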
Decide How to Reach the OpCenter from the Public Internet
A dedicated VPC supports two types of subnets: public and private.
- Public: resources with public IPv4 addresses in a public subnet have a direct route to an Internet gateway, which allows two-way access to the public Internet
- Private: resources in a private subnet, even those with public IPv4 addresses, have no direct route to an Internet gateway and require a NAT gateway to reach the public Internet
This section describes an architecture in which the OpCenter has a public IPv4 address (and a private IPv4 address) and operates in a public subnet. Worker nodes are placed in private subnets and have private IPv4 addresses only.
Determine the Connectivity Requirements for Worker Nodes
Intra-VPC access
The OpCenter communicates with worker nodes (EC2 instances) using private IPv4 addresses; no intermediate gateway is required. Usually, worker nodes require access to basic AWS services, such as EBS and S3, and certain jobs may require advanced services such as AWS Lambda.
Worker nodes can access S3 from within the VPC using an S3 gateway VPC endpoint (as long as the S3 buckets are in the same region as the VPC).
Worker nodes can access EBS from within the VPC using an interface VPC endpoint.
In both cases, network traffic between worker nodes and AWS services remains in the AWS network and does not traverse the public Internet.
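As a sketch, the S3 gateway VPC endpoint can be created with a single AWS CLI call (the region, VPC ID, and route table ID are placeholders; substitute your own values):

# Create an S3 gateway endpoint and attach it to the private subnets' route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0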
Access to the public Internet
A job running on a worker node may require access to the public Internet, for example, to retrieve data stored on another cloud service provider's network. In this case, you must either run the worker nodes in a public subnet or deploy a NAT gateway; the public IPv4 address assigned to the NAT gateway is used as the source address for traffic originating from the worker nodes.
A NAT gateway incurs charges, as does an interface VPC endpoint. There are no direct charges for a gateway VPC endpoint.
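If you choose the NAT gateway approach, the following AWS CLI sketch shows the three calls involved (all IDs are placeholders):

# Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in a public subnet
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE
# Point the private subnets' default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE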
Create a Dedicated VPC
Create a dedicated VPC by completing the following steps.
- Log in to the AWS Management Console and go to the VPC Dashboard
- At the top of the page, click Create VPC
- Select your VPC settings (the default choices work for demonstration purposes) and, at the bottom of the page, click Create VPC
- Monitor the progress
- At the bottom of the page, click View VPC. You can get to this view at any time by going to the VPC dashboard and selecting Your VPCs in the left-hand panel.
Note
After creating the VPC, you can manage the VPC (delete, change settings, and so on) or add (or delete) subnets from the VPC dashboard.
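If you prefer to script the setup, a minimal AWS CLI sketch of the same steps looks like this (the CIDR blocks, Availability Zone, and IDs are examples):

# Create the VPC and one public subnet
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 10.0.0.0/20 --availability-zone us-east-1a
# Create and attach an Internet gateway so the public subnet can reach the Internet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-EXAMPLE --vpc-id vpc-EXAMPLE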
Create VPC Endpoints
S3 Gateway VPC Endpoint
For an EC2 instance in a private subnet to access S3, you require an S3 gateway VPC endpoint. If you select S3 Gateway while filling in the form to create your VPC, the S3 gateway VPC endpoint is created automatically.
Note
The S3 gateway VPC endpoint provides access to S3 buckets located in the same region as the dedicated VPC. To access S3 buckets in other regions, you must deploy an Access Point, such as a Multi-region Access Point (MRAP). See details below.
Interface VPC Endpoint for EBS
For an EC2 instance in a private subnet to access EBS, you require an interface VPC endpoint.
Before you configure the interface VPC endpoint, create a security group that opens ports for inbound SSH, HTTPS, and NFS traffic, as follows.
- Go to the EC2 Dashboard and, from the left-hand panel, select Network & Security ->Security Groups
- On the top, right-hand side, click Create security group
- Fill in the form: give the security group a descriptive name, select your VPC, and create inbound rules that allow access from any source to the ports for SSH (port 22), HTTPS (port 443), and NFS (port 2049). For the outbound rules, allow all traffic to any destination.
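As a sketch, the equivalent AWS CLI calls look like this (the group name is illustrative and the IDs are placeholders):

# Create the security group in the dedicated VPC
aws ec2 create-security-group --group-name endpoint-sg --description "SSH, HTTPS, NFS" --vpc-id vpc-EXAMPLE
# Open the inbound ports (SSH 22, HTTPS 443, NFS 2049) from any source
aws ec2 authorize-security-group-ingress --group-id sg-EXAMPLE --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-EXAMPLE --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-EXAMPLE --protocol tcp --port 2049 --cidr 0.0.0.0/0
# Outbound traffic is allowed to any destination by default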
To create the interface VPC endpoint, complete the following steps.
- Go to the VPC Dashboard and, from the left-hand panel, select PrivateLink and Lattice -> Endpoints
- On the top, right-hand side, click Create endpoint
- Fill in the form as follows
- Choose a Name tag and select AWS Services
- In the Services search bar, enter ec2 and then select Service Name=com.amazonaws.[YOUR REGION].ec2. Use the drop-down menu to select your VPC.
- Check the Enable DNS name box and select IPv4 under DNS record IP type
- Select your Availability Zones, use the drop-down menu to select a private subnet in each Availability Zone, and select IPv4 as the IP address type
- Select the security group you created previously to open inbound ports for SSH, HTTPS, and NFS, and choose Full access under Policy
- Scroll down to the bottom of the page and click Create endpoint on the right-hand side
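The same endpoint can be created with one AWS CLI call, shown here as a sketch with placeholder IDs and us-east-1 as the region:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-EXAMPLE \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ec2 \
    --subnet-ids subnet-PRIVATE1 subnet-PRIVATE2 \
    --security-group-ids sg-EXAMPLE \
    --private-dns-enabled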
Deploy OpCenter in the Dedicated VPC
Complete the following steps.
- Go to the AWS Marketplace and, in the left-hand panel, click Discover products
- Under Search AWS Marketplace products, enter "MemVerge Memory Machine Cloud" and click on the result
- Click View purchase options. Follow the instructions to subscribe or, if you have already subscribed, click Launch your software
- Choose AWS CloudFormation and use the drop-down menu to select your region
- Click Launch with CloudFormation
- Under Create stack, select Choose an existing template and Amazon S3 URL. Then click Next.
- On the Specify stack details page, fill in the fields as follows (seek guidance from your AWS administrator if needed)
  - Stack name: Enter a unique stack name, using the allowed characters
  - 11OpCenterType: Accept the default (small) or use the pull-down menu to change the size of the VM to run OpCenter. PoC is the smallest VM.
  - 12CustomizedInstanceType: To override the selection in 11OpCenterType, enter the specific name of a compute instance (as long as it is based on x86 architecture). For example, enter m6i.4xlarge.
  - 13DiskType: Use the pull-down menu to select the type of EBS volume (default is gp3) created for the OpCenter's internal operations
  - 14KeyName: Use the pull-down menu to select the key pair you created previously
  - 21VpcId: From the pull-down menu, select your dedicated VPC
  - 22SubnetId: From the pull-down menu, select a public subnet
  - 23AvailabilityZone: Use the pull-down menu to select the Availability Zone in which the selected public subnet resides. You can get this information by going to the VPC dashboard and selecting Subnets from the left-hand panel. The Subnets table shows the mapping of each subnet to its Availability Zone.
  - 31PublicService: Select True to assign a public IP address to the OpCenter
  - 32ExternalAccessCidr: Provide the range of public IPv4 addresses allowed to access the OpCenter server. Enter the smallest range of addresses that includes the hosts that need to access the OpCenter over the public Internet. The range can be as small as a /32 CIDR block. To allow access from any address, enter 0.0.0.0/0 (this is not recommended).
  - 33SshCidr: Provide the range of IP addresses allowed to access the OpCenter server using ssh. Enter the smallest range of addresses that includes the hosts that need ssh access to the OpCenter over the public Internet. The range can be as small as a /32 CIDR block. To allow access from any address, enter 0.0.0.0/0 (this is not recommended).
  - 34AllowInternalAccess: Select True to allow worker nodes to communicate with each other using their private IP addresses
- Click Next
- On the Configure stack options page, complete the following.
  - Keep the default options
  - Check the box at the bottom of the page to acknowledge that you are aware CloudFormation may create IAM resources
  - Click Next
- On the Review and create page, scroll to the bottom and click Submit
- When the CloudFormation process completes, log in to the OpCenter (using the CLI or the web interface) and apply a valid license
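For example, a first login from a terminal might look like the following sketch (OPCENTER_PUBLIC_IP is a placeholder; the login flags and default credentials may vary by release, so check the OpCenter documentation):

$ float login -a OPCENTER_PUBLIC_IP -u admin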
Run a single-region "Hello World" job
Test your deployment by running a simple job script that accesses an S3 bucket in the same region as your OpCenter.
Use the web interface or the float CLI to submit a job. For example, log in to the OpCenter from a terminal session and enter the following float command.
$ float submit -n JOB_NAME -j jobscript1 -i cactus:latest -c 4 -m 8 --dataVolume [size=10]:/mydata --noPublicIP --subnet PRIVATE_SUBNET
where jobscript1 is a file containing the following shell commands.
#!/usr/bin/bash
# Use the AWS CLI tools bundled with the worker image
export PATH=/opt/aws/dist:$PATH
# Write all job output to a log file under the path passed as the first argument
LOG_PATH=$1
LOG_FILE=$LOG_PATH/output
touch $LOG_FILE
exec >$LOG_FILE 2>&1
echo "Congratulations! You have submitted your first dedicated VPC job"
# /mydata is the data volume attached by the float submit command
cd /mydata
echo "Hello World" >test100.file
echo "Job complete" >> test100.file
# Upload the test file to an in-region bucket (s3gatewayinuseast1demo is an
# example name; substitute a bucket in the same region as the VPC), then
# download it again to confirm two-way access through the S3 gateway endpoint
aws s3 ls s3://s3gatewayinuseast1demo
aws s3 cp test100.file s3://s3gatewayinuseast1demo
aws s3 ls s3://s3gatewayinuseast1demo
aws s3 cp s3://s3gatewayinuseast1demo/test100.file home1.file
ls *.*
Replace:
- JOB_NAME with a name to identify the job
- PRIVATE_SUBNET with one of the private subnets in the dedicated VPC
Check that the upload and download work successfully.
Congratulations! You have submitted your first dedicated VPC job
Completed 25 Bytes/25 Bytes (340 Bytes/s) with 1 file(s) remaining
upload: ./test100.file to s3://s3gatewayinuseast1demo/test100.file
2025-09-10 22:11:09 25 test100.file
Completed 25 Bytes/25 Bytes (547 Bytes/s) with 1 file(s) remaining
download: s3://s3gatewayinuseast1demo/test100.file to ./home1.file
home1.file
test100.file
Create a Multi-Region Access Point to use with PrivateLink
To access S3 buckets outside of the region where the dedicated VPC is located, create a Multi-Region Access Point (MRAP).
The MRAP virtualizes the regions in which the S3 buckets are located, so that you can access an S3 bucket without knowing where the bucket is located. In this sense, the MRAP acts like a Content Delivery Network (CDN) in which requests are routed to the nearest S3 bucket configured in the MRAP. In the background, S3 replication keeps files in the S3 buckets in different regions in sync. For example, an application may write a file to an S3 bucket in one region, and another application may later read that same file from an S3 bucket in a different region.
The MRAP is a routing mechanism, not a service that fulfills S3 requests. You must have the right permissions to access the MRAP and, in addition, the requisite permissions to access the S3 bucket.
You can combine MRAP with PrivateLink to enable worker nodes (with private IP addresses only) in private subnets to access S3 buckets in any region without leaving the AWS network.
To enable PrivateLink, you must do the following.
- Configure an s3-global VPC endpoint interface
- Enable private DNS names
With this configuration, an S3 access request from a worker node (in a private subnet) is routed via the MRAP to the s3-global endpoint interface which resolves to a private IP address in the private subnet CIDR block. With this mechanism, worker nodes can send and receive S3-related traffic without leaving the AWS network.
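You can verify the private resolution from any instance in a private subnet. Assuming the MRAP alias used in the examples below, a lookup such as the following should return an address inside the private subnet CIDR block:

nslookup mxsgz1a5c6cjz.mrap.accesspoint.s3-global.amazonaws.com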
The figure shows how to use MRAP to access an out-of-region S3 bucket.
MRAP has several powerful features. The following example uses a subset of those features to show how a worker node in a private subnet in a VPC in region us-east-1 accesses an S3 bucket in region us-west-2.
- Create an S3 bucket in us-west-2 named, for example, mrapwest2
- Specify the permissions for accessing the S3 bucket; for example, specify a policy that delegates access control to the MRAP (replace AWS_ACCOUNT_ID with your AWS account ID)
{ "Version": "2012-10-17" , "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AWS_ACCOUNT_ID:root" }, "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::mrapwest2", "arn:aws:s3:::mrapwest/*" ], "Condition": { "StringEquals": { "s3:DataAccessPointAccount": "AWS_ACCOUNT_ID" } } } ] }
- Create an MRAP that contains one S3 bucket (for example, mrapwest2)
  - Log in to the AWS Management Console and go to the S3 dashboard
  - From the left-hand panel, select Multi-region Access Points
  - At the top, click Create Multi-region Access Point
  - Fill in the form (example entries are shown in the figure) and then, at the bottom, click Create Multi-region Access Point
  - Copy the MRAP ARN. It is a string that looks like arn:aws:s3::xxxxxx:accesspoint/mxsgz1a5c6cjz.mrap, where xxxxxx is your AWS account ID. The portion that looks like mxsgz1a5c6cjz.mrap is the alias for the MRAP.
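Alternatively, you can create the MRAP with the AWS CLI, as in this sketch (the Name value is illustrative):

aws s3control create-multi-region-access-point \
    --account-id AWS_ACCOUNT_ID \
    --details '{"Name": "opcenter-mrap", "Regions": [{"Bucket": "mrapwest2"}]}'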
Create the com.amazonaws.s3-global.accesspoint Endpoint
Complete the following steps.
- Log in to the AWS Management Console and go to the VPC dashboard
- From the left-hand panel, select PrivateLink and Lattice -> Endpoints
- At the top, right-hand side, click Create endpoint
- Fill in the form (example entries are shown in the figure) and then, at the bottom, click Create endpoint
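As a sketch, the equivalent AWS CLI call looks like this (IDs are placeholders); enabling private DNS names is what allows the MRAP hostname to resolve to a private address:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-EXAMPLE \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.s3-global.accesspoint \
    --subnet-ids subnet-PRIVATE1 \
    --security-group-ids sg-EXAMPLE \
    --private-dns-enabled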
Run a multi-region "Hello World" job
Test your deployment by running a simple job script that accesses an S3 bucket in a different region from where your OpCenter is deployed.
Use the web interface or the float CLI to submit a job. For example, log in to the OpCenter from a terminal session and enter the following float command.
$ float submit -n JOB_NAME -j jobscript2 -i cactus:latest -c 4 -m 8 --dataVolume [size=10]:/mydata --noPublicIP --subnet PRIVATE_SUBNET
where jobscript2 is a file containing the following shell commands.
#!/usr/bin/bash
# Use the AWS CLI tools bundled with the worker image
export PATH=/opt/aws/dist:$PATH
# Write all job output to a log file under the path passed as the first argument
LOG_PATH=$1
LOG_FILE=$LOG_PATH/output
touch $LOG_FILE
exec >$LOG_FILE 2>&1
echo "Congratulations! You have submitted your first MRAP job"
# /mydata is the data volume attached by the float submit command
cd /mydata
echo "Hello World" >test5.file
echo "Job complete" >> test5.file
# Upload the test file through the MRAP ARN, then download it again
# using the low-level s3api command
aws s3 cp test5.file s3://arn:aws:s3::AWS_ACCOUNT_ID:accesspoint/mxsgz1a5c6cjz.mrap
aws s3 ls s3://arn:aws:s3::AWS_ACCOUNT_ID:accesspoint/mxsgz1a5c6cjz.mrap
aws s3api get-object --bucket arn:aws:s3::AWS_ACCOUNT_ID:accesspoint/mxsgz1a5c6cjz.mrap --key test5.file home1.file
ls *.*
Replace:
- JOB_NAME with a name to identify the job
- PRIVATE_SUBNET with one of the private subnets in the dedicated VPC
- AWS_ACCOUNT_ID with your AWS account ID
- mxsgz1a5c6cjz.mrap with the alias for your MRAP
Note
The upload uses the high-level aws s3 command and the download uses the low-level aws s3api command.
Check that the upload and download work successfully.
Congratulations! You have submitted your first MRAP job
Completed 25 Bytes/25 Bytes (64 Bytes/s) with 1 file(s) remaining
upload: ./test5.file to s3://arn:aws:s3::xxxxxxxx:accesspoint/mxsgz1a5c6cjz.mrap/test5.file
2025-09-10 19:30:00 25 test5.file
{
"AcceptRanges": "bytes",
"LastModified": "2025-09-10T19:30:00+00:00",
"ContentLength": 25,
"ETag": "\"7cf53ce5e3e6802a32375b008e45dd91\"",
"ContentType": "binary/octet-stream",
"ServerSideEncryption": "AES256",
"Metadata": {}
}
home1.file
test5.file