Amazon Web Services
Compute
- AWS EC2
- EC2 Instance Types
- EC2 Pricing Models
- EC2 Auto Scaling
- Elastic Load Balancing (ELB)
- AWS Lambda (Serverless Computing)
- Amazon Lightsail
- AWS Elastic Beanstalk
- AWS Fargate
- Amazon ECS (Elastic Container Service)
- Amazon EKS (Elastic Kubernetes Service)
Storage
- S3 vs. EBS vs. EFS
- Amazon S3 (Simple Storage Service)
- Amazon S3 Storage Classes
- Amazon EBS (Elastic Block Store)
- Amazon EFS (Elastic File System)
- AWS Storage Gateway
- AWS Snowball
- Amazon FSx
- AWS Backup
Database Services
- Amazon RDS
- Amazon Aurora
- Amazon DynamoDB
- Amazon ElastiCache
- Amazon Redshift
- AWS Database Migration Service (DMS)
- Amazon Neptune
- Amazon DocumentDB
Networking and Content Delivery
- Amazon VPC
- Subnets
- Internet Gateway
- AWS Direct Connect
- AWS Route 53
- AWS CloudFront
- AWS Transit Gateway
- Elastic IP Addresses
DynamoDB
- DynamoDB Global Table vs Regular DynamoDB Table
- DynamoDB Streams
- Athena Query Data to DynamoDB
- Athena Query Results with DynamoDB
- PySpark DataFrame to DynamoDB
Redshift
Lambda
Glue
Security
AWS Database Migration Service (DMS): Simplifying Cloud Database Migrations
Migrating databases from one environment to another has always been a complex and risky task. Downtime, data loss, schema mismatches, and compatibility issues often make migration stressful for businesses.
AWS Database Migration Service (DMS) solves this by enabling organizations to migrate databases quickly, securely, and with minimal downtime to AWS. Whether you are moving from on-premises to AWS, between AWS services, or even from one database engine to another, DMS makes the process seamless.
Think of DMS as a bridge that transfers your data from source to target while keeping both environments in sync until the migration is complete.
Key Features of AWS DMS
- Supports Homogeneous Migrations: for example, Oracle → Oracle, MySQL → MySQL.
- Supports Heterogeneous Migrations: for example, Oracle → PostgreSQL, SQL Server → Aurora.
- Continuous Replication: keeps source and target synchronized until cutover (see the sketch after this list).
- Minimal Downtime: applications continue running during migration.
- Wide Compatibility: works with MySQL, PostgreSQL, SQL Server, Oracle, MariaDB, MongoDB, SAP ASE, and more.
- Integration with the AWS Schema Conversion Tool (SCT): helps convert the schema when moving between different engines.
- Scalable and Secure: a managed service with built-in monitoring, encryption, and fault tolerance.
- Flexible Targets: migrate to Amazon RDS, Aurora, Redshift, S3, DynamoDB, or databases on EC2.
- Automatic Failover: ensures resilience during migration.
- Pay-as-you-go: cost-efficient for both small and large migrations.
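To make the cutover idea concrete, here is a minimal CLI sketch. It assumes a task was created with --migration-type full-load-and-cdc, and the task ARN shown is a placeholder.

# Assumes a task created with --migration-type full-load-and-cdc:
# the full load copies existing data, then CDC keeps streaming changes.
# At cutover, repoint applications to the target and stop the task.
aws dms stop-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE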
Common Use Cases
| Use Case | Description |
|---|---|
| On-premises to AWS migration | Move legacy databases into the AWS cloud. |
| Cross-region migration | Replicate databases between AWS regions for DR or compliance. |
| Database consolidation | Merge multiple smaller databases into a central managed database. |
| Continuous data replication | Keep two databases synchronized for analytics or reporting. |
| Engine upgrades | Move from older SQL Server/Oracle versions to modern cloud-native engines. |
Programs
Migrating MySQL (On-Premises) to Amazon Aurora
# Step 1: Create a replication instance (CLI)
aws dms create-replication-instance \
  --replication-instance-identifier my-migration-instance \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 100 \
  --engine-version 3.4.6

# Step 2: Define source (MySQL) and target (Aurora) endpoints
aws dms create-endpoint \
  --endpoint-identifier mysql-source \
  --endpoint-type source \
  --engine-name mysql \
  --username admin --password Pass123 \
  --server-name 192.168.1.100 --port 3306

aws dms create-endpoint \
  --endpoint-identifier aurora-target \
  --endpoint-type target \
  --engine-name aurora \
  --username masteruser --password Pass456 \
  --server-name my-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com \
  --port 3306

# Step 3: Create the migration task (full load plus ongoing change capture)
aws dms create-replication-task \
  --replication-task-identifier mysql-to-aurora \
  --source-endpoint-arn arn:aws:dms:source \
  --target-endpoint-arn arn:aws:dms:target \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mapping.json
Use Case: Moving a legacy MySQL database from on-premises to Aurora with near-zero downtime.
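Note that creating the task does not start it. A minimal follow-up sketch, assuming the task ARN returned by the create call (the ARN below is a placeholder):

# Start the task once it has been created; replace the ARN with the one
# returned by create-replication-task
aws dms start-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
  --start-replication-task-type start-replication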
Migrating Oracle Database to Amazon Redshift
{ "rules": [ { "rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": { "schema-name": "%", "table-name": "%" }, "rule-action": "include" } ]}
aws dms create-replication-task \
  --replication-task-identifier oracle-to-redshift \
  --source-endpoint-arn arn:aws:dms:oracle-source \
  --target-endpoint-arn arn:aws:dms:redshift-target \
  --migration-type full-load \
  --table-mappings file://oracle-to-redshift.json
Use Case: Migrate OLTP data from Oracle into Redshift for analytics and BI reporting.
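While the full load runs, per-table progress can be checked from the CLI. A minimal sketch (the task ARN is a placeholder):

# Show rows loaded, errors, and load state for each table in the task
aws dms describe-table-statistics \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE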
Migrating SQL Server to Amazon S3 (Data Lake)
aws dms create-endpoint \
  --endpoint-identifier sqlserver-source \
  --endpoint-type source \
  --engine-name sqlserver \
  --username sa --password StrongPass123 \
  --server-name sql.example.com --port 1433

# S3 targets need an IAM role that DMS can assume to write to the bucket;
# the role ARN below is a placeholder
aws dms create-endpoint \
  --endpoint-identifier s3-target \
  --endpoint-type target \
  --engine-name s3 \
  --s3-settings "ServiceAccessRoleArn=arn:aws:iam::123456789012:role/dms-s3-access,BucketName=my-datalake-bucket,DataFormat=parquet"
Use Case: Export relational SQL Server data into S3 in Parquet format for big data analytics.
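The two endpoints are tied together by a replication task, sketched below; the task identifier, endpoint ARNs, and mapping file are placeholders in the same style as the earlier examples:

# Full-load export of the selected tables into the S3 data lake
aws dms create-replication-task \
  --replication-task-identifier sqlserver-to-s3 \
  --source-endpoint-arn arn:aws:dms:sqlserver-source \
  --target-endpoint-arn arn:aws:dms:s3-target \
  --migration-type full-load \
  --table-mappings file://sqlserver-to-s3.json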
How to Remember AWS DMS for Exams & Interviews
- Acronym: "MIGRATE"
  - M: Minimal downtime
  - I: Integration with multiple databases
  - G: Global replication
  - R: Real-time synchronization (CDC)
  - A: Any-to-any migration (homogeneous + heterogeneous)
  - T: Targets like Aurora, Redshift, S3
  - E: Easy setup (managed service)
- Memory Trick: Think of DMS as a moving truck that carries your database safely from the old house (source) to the new house (AWS target) without stopping your work.
- Exam-Focused Pointers:
  - Works with homogeneous and heterogeneous migrations.
  - Uses a replication instance to run the migration.
  - Supports full load, CDC, or both.
  - Integrates with the AWS Schema Conversion Tool (SCT) for schema transformation.
Why It Is Important to Learn AWS DMS
- Cloud Adoption Accelerator: most companies start their AWS journey by migrating existing databases.
- High-Demand Skill: database migration expertise is required for cloud architects, data engineers, and DevOps roles.
- Certification Prep: commonly appears in the AWS Solutions Architect, Database Specialty, and Data Analytics exams.
- Real-World Applications: every enterprise modernizing its IT will need DMS professionals.
- Business Value: saves organizations millions in downtime and migration costs.
Best Practices
- Use the latest replication engine version on the replication instance for best performance.
- Split large databases into multiple migration tasks.
- Always use AWS SCT when migrating between different database engines.
- Enable CloudWatch monitoring for troubleshooting.
- Test endpoint connectivity and run a trial migration before production cutover (see the sketch below).
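A minimal pre-cutover connectivity check, assuming the replication instance and endpoint already exist (both ARNs are placeholders):

# Verify the replication instance can reach an endpoint
aws dms test-connection \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE

# Review the result of the connection test
aws dms describe-connections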
Conclusion
AWS Database Migration Service (DMS) is a game-changing service that enables businesses to migrate databases securely, efficiently, and with minimal downtime. Whether moving from on-premises to AWS, between cloud regions, or across database engines, DMS ensures a smooth migration journey.
For interviews and exams, remember:
- DMS supports homogeneous + heterogeneous migrations.
- Provides continuous replication with CDC.
- Integrates with AWS SCT for schema conversion.
- Targets include RDS, Aurora, Redshift, S3, DynamoDB.
Learning AWS DMS equips you with a critical cloud migration skill, one that is indispensable in today's data-driven enterprises.