1. Amazon MQ


What is Amazon MQ?

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ. It is designed for migrating existing on-premises message broker workloads to AWS without rewriting application code.

When to Use Amazon MQ

Use Amazon MQ ONLY when migrating existing applications that use standard broker protocols (AMQP, MQTT, STOMP, OpenWire, WSS). If building new cloud-native applications, use SQS and SNS instead — they are more scalable, cheaper, and fully serverless. Amazon MQ is a migration service, not a cloud-native choice.
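These are text-based wire protocols, which is why existing clients can point at an Amazon MQ endpoint unchanged. As a concrete illustration, a minimal STOMP SEND frame (per the STOMP 1.2 spec: command line, header lines, blank line, body, NUL terminator) can be built with nothing but the standard library — the queue name and body here are made up:

```python
def stomp_send_frame(destination: str, body: str) -> bytes:
    """Build a minimal STOMP 1.2 SEND frame: command, headers, blank line, body, NUL."""
    headers = [
        f"destination:{destination}",
        "content-type:text/plain",
        f"content-length:{len(body.encode())}",
    ]
    return ("SEND\n" + "\n".join(headers) + "\n\n" + body).encode() + b"\x00"

frame = stomp_send_frame("/queue/orders", "hello")
# The same bytes could be written to a TLS socket opened against an ActiveMQ
# broker's STOMP endpoint (port 61614 on Amazon MQ).
```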

Supported Engines

  1. Apache ActiveMQ: classic JMS-style broker; speaks OpenWire, AMQP, STOMP, MQTT, and WSS
  2. RabbitMQ: AMQP 0-9-1 broker, widely used for task queues and message routing

Key Characteristics

  1. Managed service: AWS handles provisioning, patching, and maintenance
  2. Runs on EC2 instances under the hood (you choose instance type)
  3. NOT serverless — you provision broker instances
  4. Supports both queue (point-to-point) and topic (pub/sub) patterns
  5. Multi-AZ deployment with automatic failover (active/standby)
  6. Encryption at rest (KMS) and in transit (TLS)
  7. Runs in your VPC (private connectivity)
  8. EBS or EFS storage for message persistence

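The characteristics above (managed but instance-based, multi-AZ active/standby, private by default) map directly onto the broker-creation API. A sketch of the kwargs for boto3's `mq` `create_broker` call — the broker name, engine version, and credentials are illustrative, and the actual call (commented out) needs AWS credentials:

```python
def broker_params(name: str, username: str, password: str) -> dict:
    """Build kwargs for boto3 mq create_broker; values are illustrative."""
    return {
        "BrokerName": name,
        "EngineType": "ACTIVEMQ",          # or "RABBITMQ"
        "EngineVersion": "5.17.6",         # example version; check what MQ currently offers
        "HostInstanceType": "mq.m5.large", # you choose the EC2-backed instance type
        "DeploymentMode": "ACTIVE_STANDBY_MULTI_AZ",  # multi-AZ with automatic failover
        "PubliclyAccessible": False,       # keep the broker private inside the VPC
        "AutoMinorVersionUpgrade": True,   # AWS applies minor engine patches
        "Users": [{"Username": username, "Password": password}],
    }

# With AWS credentials configured, the broker would be created with:
# import boto3
# boto3.client("mq").create_broker(**broker_params("my-broker", "admin", "s3cretPassw0rd!"))
```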

Amazon MQ Deployment Modes

  1. Single-instance broker: one broker in one AZ; for dev/test
  2. Active/standby (ActiveMQ): broker pair across two AZs sharing EFS storage; automatic failover to the standby
  3. Cluster deployment (RabbitMQ): three-node cluster spread across AZs behind a load balancer

Amazon MQ vs SQS/SNS

  1. Amazon MQ: open standard protocols, runs on provisioned instances, scaling limited by instance size; choose for lift-and-shift migrations
  2. SQS/SNS: AWS-proprietary APIs, fully serverless, near-unlimited scaling, pay-per-request; choose for new cloud-native applications

2. Amazon MSK (Managed Streaming for Apache Kafka)


What is Amazon MSK?

Amazon MSK is a fully managed service for Apache Kafka. It lets you build and run applications that process streaming data with Kafka, without provisioning or operating Kafka infrastructure yourself.

When to Use MSK

Use MSK when your application is already built on Apache Kafka or when you specifically need Kafka features (consumer groups, partitions, log compaction, Kafka Connect, Kafka Streams). For new cloud-native streaming, consider Kinesis Data Streams instead — it’s simpler and fully serverless.

Key Characteristics

  1. Fully managed Apache Kafka: AWS manages brokers, ZooKeeper/KRaft, patching, HA
  2. Runs on EC2 instances (you choose instance types) in your VPC
  3. Data replicated within the cluster (configurable replication factor)
  4. Multi-AZ deployment (2 or 3 AZs)
  5. EBS storage for broker data (auto-scaling available)
  6. Supports Apache Kafka APIs and tools natively (Kafka Connect, Kafka Streams, MirrorMaker)
  7. Encryption at rest (KMS) and in transit (TLS)
  8. IAM, TLS, or SASL/SCRAM authentication

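As with Amazon MQ, these characteristics surface as parameters when provisioning a cluster. A sketch of the kwargs for boto3's `kafka` `create_cluster` call — the cluster name, Kafka version, subnet/security-group IDs, and sizes are illustrative, and the actual call (commented out) needs AWS credentials:

```python
def msk_cluster_params(name: str, subnet_ids: list, sg_ids: list) -> dict:
    """Build kwargs for boto3 kafka create_cluster; values are illustrative."""
    return {
        "ClusterName": name,
        "KafkaVersion": "3.6.0",         # example version; check what MSK currently offers
        "NumberOfBrokerNodes": 3,        # spread across the AZs of the client subnets
        "BrokerNodeGroupInfo": {
            "InstanceType": "kafka.m5.large",   # you choose the EC2-backed instance type
            "ClientSubnets": subnet_ids,        # one subnet per AZ, in your VPC
            "SecurityGroups": sg_ids,
            "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 100}},  # GiB of EBS per broker
        },
        "EncryptionInfo": {
            "EncryptionInTransit": {"ClientBroker": "TLS"},  # TLS between clients and brokers
        },
    }

# With AWS credentials configured:
# import boto3
# boto3.client("kafka").create_cluster(**msk_cluster_params(
#     "my-cluster", ["subnet-aaa", "subnet-bbb", "subnet-ccc"], ["sg-xxx"]))
```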

MSK Serverless

  1. Run Kafka without managing or provisioning brokers
  2. Auto-scales compute and storage based on throughput
  3. Pay per data in/out and per partition-hour
  4. Simpler but less configurable than provisioned MSK
  5. Best for: variable workloads, getting started with Kafka on AWS

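The pay-per-use dimensions above can be made concrete with a back-of-the-envelope cost model. The rates below are placeholders, NOT real AWS prices — the actual figures are on the MSK pricing page:

```python
def msk_serverless_monthly_cost(gb_in: float, gb_out: float, partitions: int,
                                rate_in=0.10, rate_out=0.05,
                                rate_partition_hr=0.0015, hours=730):
    """Sum the usage-based dimensions (data in, data out, partition-hours).
    Rates are hypothetical placeholders for illustration only."""
    return gb_in * rate_in + gb_out * rate_out + partitions * hours * rate_partition_hr

# e.g. 100 GB in, 200 GB out, 10 partitions for a 730-hour month
cost = msk_serverless_monthly_cost(100, 200, 10)
```

The point of the model is the shape of the bill: with no traffic the compute charge falls away and only partition-hours (and any retained storage) remain, unlike provisioned MSK where broker instances bill around the clock.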

MSK Connect

  1. Managed Kafka Connect: run connectors to stream data between Kafka and external systems
  2. Source connectors: pull data into Kafka (e.g., from databases or S3)
  3. Sink connectors: push data from Kafka to destinations (e.g., S3, OpenSearch, DynamoDB)
  4. Deploy community or custom connectors without managing infrastructure
  5. Auto-scales connector workers based on workload

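A connector is configured with a flat key/value property map. As an illustration, a sink configuration in the style of the Confluent S3 sink connector — the topic and bucket names are hypothetical, and the exact property keys depend on the connector you deploy:

```python
# Illustrative MSK Connect configuration for an S3 sink connector
# (property keys follow Kafka Connect / Confluent S3 sink conventions;
#  the topic and bucket names are made up).
s3_sink_config = {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "orders",                  # Kafka topic(s) to drain
    "s3.bucket.name": "my-sink-bucket",  # destination S3 bucket
    "s3.region": "us-east-1",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000",                # records per S3 object
    "tasks.max": "2",                    # parallel connector tasks
}
```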
3. MSK vs Kinesis Data Streams

  1. MSK: open-source Kafka API, you manage partitions and scaling (unless serverless), data retained as long as you provision storage, ~1 MB default message size (configurable higher)
  2. Kinesis Data Streams: AWS-proprietary API, shards instead of partitions, retention 1 to 365 days, 1 MB hard record limit, serverless via on-demand capacity mode

Exam Tip

MQ & MSK: "Migrate RabbitMQ/ActiveMQ to AWS" = Amazon MQ. "MQTT protocol" = Amazon MQ (ActiveMQ). "Migrate Kafka to AWS" = MSK. "New cloud-native queue" = SQS. "New cloud-native streaming" = Kinesis. "Kafka Connect" = MSK Connect. MSK Serverless = no brokers to manage. Amazon MQ is NOT serverless. Both run in VPC.