Production Environment Configurations
This section provides hardware specifications for different target volume levels. The system deploys to a dedicated server that handles cluster management and to additional servers that provide the typical Momentum MTA node functionality.
TIP: If running in cloud environments, CPU-optimized instances are recommended over general-purpose and memory-optimized instances.
The Cluster Manager is a dedicated node that aggregates the logs of all MTAs in the cluster and can optionally centralize some data storage in a PostgreSQL server. The Cluster Manager node is not intended to process any email traffic. The hardware specifications for this node are:
Resource | Specification |
---|---|
CPU Cores | 8 |
CPU Speed | 3.2 GHz (min. 2.5 GHz) |
Memory | 16 GiB RAM |
Network Interface | 1 Gbps NIC |
Storage | 2 x 600 GiB 15k RPM HDD in RAID1 |
The MTA nodes are the workhorses of the Momentum cluster. The following topologies are rated for operation at 50% CPU utilization in order to accommodate traffic spikes. All volumes assume an average message size of 100 KiB.
NOTE: The Cluster Manager node is not counted in the following configurations.
TIP: Given the number of CPU cores in the configurations below, performance ratings higher than those listed can be achieved with the Supercharger feature, i.e., by configuring multiple event loops.
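To see how these volume ratings translate into sustained network throughput, the minimal sketch below converts a per-node message rate into bandwidth, using the 100 KiB average message size assumed above. The function name and the example rate (1.5 M msgs/hr, the Enterprise Basic per-node rating) are illustrative only.

```python
# Rough throughput estimate: converts a per-node message rate into the
# sustained bandwidth it implies, assuming the 100 KiB average message
# size used for the ratings in this section.

AVG_MESSAGE_BYTES = 100 * 1024  # 100 KiB average message size (assumption from this section)

def sustained_bandwidth_mbps(msgs_per_hour: float) -> float:
    """Approximate sustained throughput in Mbit/s for one traffic direction."""
    bytes_per_second = msgs_per_hour * AVG_MESSAGE_BYTES / 3600
    return bytes_per_second * 8 / 1_000_000

if __name__ == "__main__":
    # Enterprise Basic per-node rating: 1.5 M msgs/hr
    print(f"{sustained_bandwidth_mbps(1_500_000):.0f} Mbit/s")  # ~341 Mbit/s
```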
The Enterprise Basic configuration consists of three nodes running all MTA roles with the resources specified below.
MTA Node Capacity | Cluster Capacity (2 Nodes Operational) | Peak Cluster Capacity (3 Nodes Operational) |
---|---|---|
1.5 M msgs/hr | 3 M msgs/hr | 4.5 M msgs/hr |
Resource | Specification |
---|---|
CPU Cores | 8 |
CPU Speed | 3.2 GHz (min. 2.5 GHz) |
Memory | 32 GiB (min. 16 GiB) RAM |
Network Interface | 1 Gbps NIC |
Array | Mount Points | Configuration |
---|---|---|
All Storage | | 6 x 300 GiB 15k RPM HDD
Message Spools* | /var/spool/ecelerity | 2 x 300 GiB in RAID1 |
OS, App Binaries, Logs, Platform DB, Analytics DB | / (root), /opt/msys, /var/log/ecelerity, /var/db/cassandra, /var/db/vertica | 2 x 300 GiB in RAID1
(*) This array should be dedicated to the spools.
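Because the footnote above calls for a dedicated spool array, a quick way to confirm the layout on a provisioned node is to compare the filesystems backing each mount point. The following is a minimal sketch using only the Python standard library; the path list mirrors the mount points in the table above, and the check verifies separate filesystems, not separate physical arrays.

```python
import os

# Mount points from the storage table above; the spool should sit on its
# own array, separate from the filesystems listed under OTHERS.
SPOOL = "/var/spool/ecelerity"
OTHERS = ["/", "/opt/msys", "/var/log/ecelerity", "/var/db/cassandra", "/var/db/vertica"]

def device_of(path: str) -> int:
    """Return the device ID of the filesystem holding `path`."""
    return os.stat(path).st_dev

spool_dev = device_of(SPOOL)
shared = [p for p in OTHERS if os.path.exists(p) and device_of(p) == spool_dev]
if shared:
    print(f"WARNING: spool shares a filesystem with: {', '.join(shared)}")
else:
    print("Spool filesystem is separate from the other mount points.")
```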
The Enterprise Standard configuration consists of three nodes running all MTA roles with the resources specified below.
MTA Node Capacity | Cluster Capacity (2 Nodes Operational) | Peak Cluster Capacity (3 Nodes Operational) |
---|---|---|
3 M msgs/hr | 6 M msgs/hr | 9 M msgs/hr |
Resource | Specification |
---|---|
CPU Cores | 16 |
CPU Speed | 3.2 GHz (min. 2.5 GHz) |
Memory | 64 GiB (min. 32 GiB) RAM |
Network Interface | 1 Gbps NIC |
Array | Mount Points | Configuration |
---|---|---|
All Storage | | 8 x 300 GiB 15k RPM HDD
Message Spools* | /var/spool/ecelerity | 4 x 300 GiB in RAID10 |
OS, App Binaries, Logs, Platform DB | / (root), /opt/msys, /var/log/ecelerity, /var/db/cassandra | 2 x 300 GiB in RAID1
Analytics DB* | /var/db/vertica | 2 x 300 GiB in RAID1 |
(*) These arrays should be dedicated.
The Enterprise Plus configuration consists of three nodes running all MTA roles with the resources specified below.
MTA Node Capacity | Cluster Capacity (2 Nodes Operational) | Peak Cluster Capacity (3 Nodes Operational) |
---|---|---|
6 M msgs/hr | 12 M msgs/hr | 18 M msgs/hr |
Resource | Specification |
---|---|
CPU Cores | 32 |
CPU Speed | 3.2 GHz (min. 2.5 GHz) |
Memory | 64 GiB RAM |
Network Interface | 1 Gbps NIC |
Array | Mount Points | Configuration |
---|---|---|
All Storage | | 8 x 600 GiB 15k RPM HDD
Message Spools* | /var/spool/ecelerity | 4 x 600 GiB in RAID10 |
OS, App Binaries, Logs, Platform DB | / (root), /opt/msys, /var/log/ecelerity, /var/db/cassandra | 2 x 600 GiB in RAID1
Analytics DB* | /var/db/vertica | 2 x 600 GiB in RAID1 |
(*) These arrays should be dedicated.
The Enterprise Scaling configuration consists of both an Analytics Cluster and a Platform Cluster. Because large-volume deployments require more resources for sending than for analytics, Message Systems recommends separating the Platform and Analytics roles into separate clusters. This configuration allows you to scale the Platform Cluster independently of the Analytics Cluster.
The baseline configuration consists of a three-node Analytics Cluster and a three-node Platform Cluster. You may scale sending capacity by incrementally adding Platform nodes to the cluster as needed; see the sizing sketch after the capacity table below.
Baseline Cluster Capacity (2 Nodes Operational) | Baseline Peak Cluster Capacity (3 Nodes Operational) | Incremental Platform Node Capacity |
---|---|---|
12 M msgs/hr | 18 M msgs/hr | 6 M msgs/hr |
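As a rough planning aid, the sketch below turns the baseline and incremental figures from the table above into an estimated Platform node count for a target sending volume. It assumes, as in the baseline figures, that capacity is rated with one node out of service; the function name and the 30 M msgs/hr example target are illustrative only.

```python
import math

# Figures from the Enterprise Scaling capacity table above (msgs/hr).
BASELINE_CAPACITY = 12_000_000         # baseline cluster capacity (2 of 3 nodes operational)
BASELINE_PLATFORM_NODES = 3            # Platform nodes in the baseline cluster
INCREMENTAL_NODE_CAPACITY = 6_000_000  # capacity added per additional Platform node

def platform_nodes_for(target_msgs_per_hour: int) -> int:
    """Estimate how many Platform nodes are needed for a target sustained volume."""
    if target_msgs_per_hour <= BASELINE_CAPACITY:
        return BASELINE_PLATFORM_NODES
    extra = math.ceil((target_msgs_per_hour - BASELINE_CAPACITY) / INCREMENTAL_NODE_CAPACITY)
    return BASELINE_PLATFORM_NODES + extra

print(platform_nodes_for(30_000_000))  # e.g. 30 M msgs/hr -> 6 Platform nodes
```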
Resource | Specification |
---|---|
CPU Cores | 32 |
CPU Speed | 3.2 GHz (min. 2.5 GHz) |
Memory | 64 GiB RAM |
Network Interface | 1 Gbps NIC |
Platform node storage:
Array | Mount Points | Configuration |
---|---|---|
All Storage | | 6 x 600 GiB 15k RPM HDD
Message Spools* | /var/spool/ecelerity | 4 x 600 GiB in RAID10 |
OS, App Binaries, Logs, Platform DB | / (root), /opt/msys, /var/log/ecelerity, /var/db/cassandra | 2 x 600 GiB in RAID1
(*) This array should be dedicated to the spools.
Analytics node storage:
Array | Mount Points | Configuration |
---|---|---|
All Storage | | 4 x 600 GiB 15k RPM HDD
OS, App Binaries, Logs | / (root), /opt/msys, /var/log/ecelerity | 2 x 600 GiB in RAID1
Analytics DB* | /var/db/vertica | 2 x 600 GiB in RAID1 |
(*) This array should be dedicated to the Analytics DB.