Installing the Migration Toolkit for Containers
After you install the Migration Toolkit for Containers Operator on OKD 4.8 by using the Operator Lifecycle Manager, you manually install the legacy Migration Toolkit for Containers Operator on OKD 3.
By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the MigrationController custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster.
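For example, a minimal sketch of the relevant MigrationController parameters, assuming the migration_controller and migration_ui boolean fields from the operator's default manifest control where those components run (verify the exact field names against your MTC version):
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  cluster_name: host
  migration_controller: true  # run the Migration Controller pod on this cluster
  migration_ui: true          # run the MTC web console on this cluster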
After you have installed MTC, you must configure object storage to use as a replication repository.
You must install the Migration Toolkit for Containers (MTC) version that is compatible with your OKD version.
You cannot install MTC 1.6.x on OKD versions 3.7 to 4.5 because the custom resource definition API versions are incompatible.
You can migrate workloads from a source cluster with MTC 1.5.1 to a target cluster with MTC 1.6.x as long as the MigrationController
custom resource and the MTC web console are running on the target cluster.
Installing the legacy Migration Toolkit for Containers Operator on OKD 3
You can install the legacy Migration Toolkit for Containers Operator manually on OKD 3.
Prerequisites
You must be logged in as a user with cluster-admin privileges on all clusters.
You must have access to registry.redhat.io.
You must have podman installed.
You must create an image stream secret and copy it to each node in the cluster.
Procedure
Log in to registry.redhat.io with your Red Hat Customer Portal credentials:
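For example, using podman with sudo as in the subsequent download commands:
$ sudo podman login registry.redhat.io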
Download the operator.yml file for your OKD version:
OKD version 3.7:
$ sudo podman cp $(sudo podman create \
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.1):/operator-3.7.yml ./
OKD version 3.9 to 3.11:
$ sudo podman cp $(sudo podman create \
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.1):/operator.yml ./
Download the controller.yml file:
$ sudo podman cp $(sudo podman create \
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.1):/controller.yml ./
Log in to your OKD 3 cluster.
Verify that the cluster can authenticate with registry.redhat.io:
$ oc run test --image registry.redhat.io/ubi8 --command sleep infinity
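If the test pod reaches the Running state, the cluster can pull images from registry.redhat.io. You can then remove the test pod, for example:
$ oc delete pod test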
Create the Migration Toolkit for Containers Operator object:
$ oc create -f operator.yml
Example output
namespace/openshift-migration created
rolebinding.rbac.authorization.k8s.io/system:deployers created
serviceaccount/migration-operator created
customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
role.rbac.authorization.k8s.io/migration-operator created
rolebinding.rbac.authorization.k8s.io/migration-operator created
clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
deployment.apps/migration-operator created
Error from server (AlreadyExists): error when creating "./operator.yml":
rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
Error from server (AlreadyExists): error when creating "./operator.yml":
rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OKD 3 that are provided in later releases.
Create the MigrationController object:
$ oc create -f controller.yml
Verify that the MTC pods are running:
$ oc get pods -n openshift-migration
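Optionally, you can wait until all of the pods report Ready, for example:
$ oc wait --for=condition=Ready pods --all -n openshift-migration --timeout=300s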
Installing the Migration Toolkit for Containers Operator on OKD 4.8
You install the Migration Toolkit for Containers Operator on OKD 4.8 by using the Operator Lifecycle Manager.
Prerequisites
You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
In the OKD web console, click Operators → OperatorHub.
Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
Select the Migration Toolkit for Containers Operator and click Install.
Click Install.
On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
Click Migration Toolkit for Containers Operator.
Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
Click Create.
Click Workloads → Pods to verify that the MTC pods are running.
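If you prefer the command line, you can run the same check with, for example:
$ oc get pods -n openshift-migration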
Configuring proxies
For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.
For OKD 4.2 to 4.8, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP
header to the API server. Otherwise, an Upgrade request required
error is displayed. The MigrationController
CR uses SPDY to run commands within remote pods. The Upgrade HTTP
header is required in order to open a websocket connection with the API server.
Direct volume migration (DVM) supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
Prerequisites
You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
Get the MigrationController CR manifest:
$ oc get migrationcontroller <migration_controller> -n openshift-migration
Update the proxy parameters:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: <migration_controller>
  namespace: openshift-migration
...
spec:
  stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
  httpProxy: http://<username>:<password>@<ip>:<port> (2)
  httpsProxy: http://<username>:<password>@<ip>:<port> (3)
  noProxy: example.com (4)
1 Stunnel proxy URL for direct volume migration.
2 Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
3 Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
4 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
Save the manifest as migration-controller.yaml.
Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
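To confirm that the proxy settings were applied, you can inspect the updated CR, for example:
$ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml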
For more information, see Configuring the cluster-wide proxy.
Configuring a replication repository
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The following storage providers are supported:
Multi-Cloud Object Gateway (MCG)
Amazon Web Services (AWS) S3
Google Cloud Platform (GCP)
Microsoft Azure Blob
Generic S3 object storage, for example, Minio or Ceph S3
All clusters must have uninterrupted network access to the replication repository.
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
Configuring Multi-Cloud Object Gateway
You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Installing the OpenShift Container Storage Operator
You can install the OpenShift Container Storage Operator from OperatorHub.
Prerequisites
Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager site as shown in Obtaining the installation program in the installation documentation for your platform.
If you have the pull secret, add the
redhat-operators
catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.
Procedure
In the OKD web console, click Operators → OperatorHub.
Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
Select the OpenShift Container Storage Operator and click Install.
Select an Update Channel, Installation Mode, and Approval Strategy.
Click Install.
On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.
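To confirm the installation from the command line, you can list the Operator's ClusterServiceVersion in the openshift-storage namespace, for example:
$ oc get csv -n openshift-storage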
Creating the Multi-Cloud Object Gateway storage bucket
You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).
Procedure
Log in to the OKD cluster:
$ oc login -u <username>
Create the NooBaa CR configuration file, noobaa.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: <noobaa>
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
  coreResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
Create the NooBaa object:
$ oc create -f noobaa.yml
Create the BackingStore CR configuration file, bs.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: <mcg_backing_store>
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 3 (1)
    resources:
      requests:
        storage: <volume_size> (2)
    storageClass: <storage_class> (3)
  type: pv-pool
1 Specify the number of volumes in the persistent volume pool.
2 Specify the size of the volumes, for example, 50Gi.
3 Specify the storage class, for example, gp2.
Create the BackingStore object:
$ oc create -f bs.yml
Create the BucketClass CR configuration file, bc.yml.
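A minimal sketch of its content, assuming the standard noobaa.io/v1alpha1 BucketClass schema and referencing the backing store created in the previous step:
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: <mcg_bucket_class>
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <mcg_backing_store>
      placement: Spread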
Create the BucketClass object:
$ oc create -f bc.yml
Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <bucket>
  namespace: openshift-storage
spec:
  bucketName: <bucket> (1)
  storageClassName: <storage_class>
  additionalConfig:
    bucketclass: <mcg_bucket_class>
1 Record the bucket name for adding the replication repository to the MTC web console.
Create the ObjectBucketClaim object:
$ oc create -f obc.yml
Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:
$ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'
This process can take five to ten minutes.
Obtain and record the following values, which are required when you add the replication repository to the MTC web console:
S3 endpoint:
$ oc get route -n openshift-storage s3
S3 provider access key:
$ oc get secret -n openshift-storage migstorage \
-o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
S3 provider secret access key:
$ oc get secret -n openshift-storage migstorage \
-o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
Configuring Amazon Web Services
You can configure an Amazon Web Services (AWS) S3 storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
The AWS S3 storage bucket must be accessible to the source and target clusters.
You must have the AWS CLI installed.
If you are using the snapshot copy method:
You must have access to EC2 Elastic Block Storage (EBS).
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Procedure
Create an AWS S3 bucket:
$ aws s3api create-bucket \
--bucket <bucket> \ (1)
--region <bucket_region> (2)
1 Specify your S3 bucket name.
2 Specify your S3 bucket region, for example, us-east-1.
Create the IAM user velero:
$ aws iam create-user --user-name velero
Create an EC2 EBS snapshot policy:
$ cat > velero-ec2-snapshot-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot"
],
"Resource": "*"
}
]
}
EOF
Create an AWS S3 access policy for one or for all S3 buckets:
$ cat > velero-s3-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::<bucket>/*" (1)
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::<bucket>" (1)
]
}
]
}
EOF
1 To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name, as in the following example:
"Resource": [
    "arn:aws:s3:::*"
]
Attach the EC2 EBS policy to velero:
$ aws iam put-user-policy \
--user-name velero \
--policy-name velero-ebs \
--policy-document file://velero-ec2-snapshot-policy.json
Attach the AWS S3 policy to velero:
$ aws iam put-user-policy \
--user-name velero \
--policy-name velero-s3 \
--policy-document file://velero-s3-policy.json
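Optionally, you can verify that both inline policies are attached to the velero user, for example:
$ aws iam list-user-policies --user-name velero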
Create an access key for velero:
$ aws iam create-access-key --user-name velero
Example output
{
"AccessKey": {
"UserName": "velero",
"Status": "Active",
"CreateDate": "2017-07-31T22:24:41.576Z",
"SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, (1)
"AccessKeyId": <AWS_ACCESS_KEY_ID> (1)
}
}
1 Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.
Configuring Google Cloud Platform
You can configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
The GCP storage bucket must be accessible to the source and target clusters.
You must have gsutil installed.
If you are using the snapshot copy method:
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Procedure
Log in to gsutil:
$ gsutil init
Set the BUCKET variable:
$ BUCKET=<bucket> (1)
1 Specify your bucket name.
Create a storage bucket:
$ gsutil mb gs://$BUCKET/
Set the PROJECT_ID variable to your active project:
$ PROJECT_ID=`gcloud config get-value project`
Create a velero IAM service account:
$ gcloud iam service-accounts create velero \
--display-name "Velero Storage"
Create the SERVICE_ACCOUNT_EMAIL variable:
$ SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list \
--filter="displayName:Velero Storage" \
--format 'value(email)'`
Create the ROLE_PERMISSIONS variable:
$ ROLE_PERMISSIONS=(
compute.disks.get
compute.disks.create
compute.disks.createSnapshot
compute.snapshots.get
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
)
Create the velero.server custom role:
$ gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
Add IAM policy binding to the project:
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
Update the IAM service account:
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
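Optionally, you can confirm that the service account was granted the objectAdmin role on the bucket, for example:
$ gsutil iam get gs://${BUCKET}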
Save the IAM service account keys to the credentials-velero file in the current directory:
$ gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
Configuring Microsoft Azure
You can configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
You must have an .
You must have the Azure CLI installed.
The Azure Blob storage container must be accessible to the source and target clusters.
If you are using the snapshot copy method:
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Procedure
Set the AZURE_RESOURCE_GROUP variable:
$ AZURE_RESOURCE_GROUP=Velero_Backups
Create an Azure resource group:
$ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> (1)
1 Specify your location.
Set the AZURE_STORAGE_ACCOUNT_ID variable:
$ AZURE_STORAGE_ACCOUNT_ID=velerobackups
Create an Azure storage account:
$ az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_RESOURCE_GROUP \
--sku Standard_GRS \
--encryption-services blob \
--https-only true \
--kind BlobStorage \
--access-tier Hot
Set the BLOB_CONTAINER variable:
$ BLOB_CONTAINER=velero
Create an Azure Blob storage container:
$ az storage container create \
-n $BLOB_CONTAINER \
--public-access off \
--account-name $AZURE_STORAGE_ACCOUNT_ID
Create a service principal and credentials for velero:
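The exact commands depend on your Azure environment; a minimal sketch using the az CLI, setting the variables that the next step consumes (the service principal name velero and the Contributor role are assumptions), is:
$ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
$ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
$ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`
$ AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`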
Save the service principal credentials in the credentials-velero file:
$ cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF