
Production Environment Configurations

Last updated May 2024

This section provides hardware specifications for different target volume levels. The system deploys to one dedicated server for cluster management and to additional servers that provide the typical functionality of a Momentum MTA node.

TIP: If running in cloud environments, CPU-optimized instances are recommended over general-purpose and memory-optimized instances.

Cluster Manager Node

The Cluster Manager is a dedicated node that aggregates the logs of all MTA nodes in the cluster and optionally centralizes some data storage in a PostgreSQL server. The Cluster Manager node is not intended to process any email traffic. The hardware specifications for this node are:

Resource             Specification
CPU Cores            8
CPU Speed            3.2 GHz (min. 2.5 GHz)
Memory               16 GiB RAM
Network Interface    1 Gbps NIC
Storage              2 x 600 GiB 15k RPM HDD in RAID1

MTA Nodes

The MTA nodes are the workhorses of the Momentum cluster. The following topologies are rated for use at 50% CPU utilization in order to accommodate traffic spikes. All volumes assume an average message size of 100 KiB.

NOTE: The Cluster Manager node is not counted in the following configurations.

TIP: With the number of CPU cores in the configurations below, performance ratings higher than those listed can be achieved by using the Supercharger feature, i.e., configuring multiple event loops.
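As a rough illustration of what these ratings imply for network sizing, the following Python sketch converts the Enterprise Basic per-node rating into sustained payload bandwidth, using the 100 KiB average message size assumed above. The numbers and variable names are illustrative only and are not part of the product.

```python
# Bandwidth implied by the Enterprise Basic per-node rating, using the
# section's assumption of a 100 KiB average message size. Figures and
# names are illustrative; protocol overhead is not included.

AVG_MSG_BYTES = 100 * 1024        # 100 KiB average message size
MSGS_PER_HOUR = 1_500_000         # Enterprise Basic per-node rating

bytes_per_second = MSGS_PER_HOUR * AVG_MSG_BYTES / 3600
mbps = bytes_per_second * 8 / 1_000_000

print(f"~{bytes_per_second / 1e6:.0f} MB/s payload, ~{mbps:.0f} Mbps per direction")
# -> roughly 43 MB/s (~341 Mbps) of message payload each way,
#    leaving headroom on the 1 Gbps NIC.
```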

Enterprise Basic

The Enterprise Basic configuration consists of three nodes running all MTA roles with the resources specified below.

Performance Ratings

MTA Node Capacity    Cluster Capacity (2 Nodes Operational)    Peak Cluster Capacity (3 Nodes Operational)
1.5 M msgs/hr        3 M msgs/hr                               4.5 M msgs/hr

Hardware Specifications

Resource             Specification
CPU Cores            8
CPU Speed            3.2 GHz (min. 2.5 GHz)
Memory               32 GiB (min. 16 GiB) RAM
Network Interface    1 Gbps NIC

Storage Configuration

Array             Mount Points             Configuration
All Storage                                6 x 300 GiB 15k RPM HDD
Message Spools*   /var/spool/ecelerity     2 x 300 GiB in RAID1
OS                / (root)                 2 x 300 GiB in RAID1
App Binaries      /opt/msys
Logs              /var/log/ecelerity
Platform DB       /var/db/cassandra
Analytics DB      /var/db/vertica

(*) This array should be dedicated to the spools.

Enterprise Standard

The Enterprise Standard configuration consists of three nodes running all MTA roles with the resources specified below.

Performance Ratings

MTA Node Capacity    Cluster Capacity (2 Nodes Operational)    Peak Cluster Capacity (3 Nodes Operational)
3 M msgs/hr          6 M msgs/hr                               9 M msgs/hr

Hardware Specifications

Resource             Specification
CPU Cores            16
CPU Speed            3.2 GHz (min. 2.5 GHz)
Memory               64 GiB (min. 32 GiB) RAM
Network Interface    1 Gbps NIC

Storage Configuration

Array             Mount Points             Configuration
All Storage                                8 x 300 GiB 15k RPM HDD
Message Spools*   /var/spool/ecelerity     4 x 300 GiB in RAID10
OS                / (root)                 2 x 300 GiB in RAID1
App Binaries      /opt/msys
Logs              /var/log/ecelerity
Platform DB       /var/db/cassandra
Analytics DB*     /var/db/vertica          2 x 300 GiB in RAID1

(*) These arrays should be dedicated.
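For reference, the usable capacity behind these arrays can be estimated from the RAID level: a RAID1 mirror exposes the capacity of a single disk, while RAID10 exposes half the total. The following Python sketch is a back-of-the-envelope check using the Enterprise Standard disk sizes above; it is illustrative only.

```python
# Usable capacity of the Enterprise Standard arrays above. RAID1 mirrors
# a pair of disks (usable = one disk); RAID10 stripes across mirrored
# pairs (usable = half the disks). Illustrative arithmetic only.

def usable_gib(disks: int, disk_gib: int, raid_level: str) -> int:
    if raid_level == "RAID1":
        return disk_gib                 # mirrored pair: one disk's capacity
    if raid_level == "RAID10":
        return (disks // 2) * disk_gib  # half the disks hold unique data
    raise ValueError(f"unhandled RAID level: {raid_level}")

print(usable_gib(4, 300, "RAID10"))     # message spool array  -> 600 GiB
print(usable_gib(2, 300, "RAID1"))      # OS/platform array    -> 300 GiB
print(usable_gib(2, 300, "RAID1"))      # analytics DB array   -> 300 GiB
```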

Enterprise Plus

The Enterprise Plus configuration consists of three nodes running all MTA roles with the resources specified below.

Performance Ratings

MTA Node Capacity    Cluster Capacity (2 Nodes Operational)    Peak Cluster Capacity (3 Nodes Operational)
6 M msgs/hr          12 M msgs/hr                              18 M msgs/hr

Hardware Specifications

Resource             Specification
CPU Cores            32
CPU Speed            3.2 GHz (min. 2.5 GHz)
Memory               64 GiB RAM
Network Interface    1 Gbps NIC

Storage Configuration

Array             Mount Points             Configuration
All Storage                                8 x 600 GiB 15k RPM HDD
Message Spools*   /var/spool/ecelerity     4 x 600 GiB in RAID10
OS                / (root)                 2 x 600 GiB in RAID1
App Binaries      /opt/msys
Logs              /var/log/ecelerity
Platform DB       /var/db/cassandra
Analytics DB*     /var/db/vertica          2 x 600 GiB in RAID1

(*) These arrays should be dedicated.
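To put the spool array size in context, the sketch below estimates how long the Enterprise Plus spool could buffer inbound traffic at the rated per-node throughput if delivery stalled entirely, again assuming the 100 KiB average message size. Actual spool consumption also depends on metadata, retries, and filesystem overhead, so treat this as an approximation only.

```python
# How long the Enterprise Plus spool array could buffer traffic at the
# rated per-node throughput if delivery stalled entirely. Illustrative
# estimate only; real spool usage also includes metadata and retries.

SPOOL_USABLE_GIB = 2 * 600            # 4 x 600 GiB in RAID10 -> ~1200 GiB usable
NODE_MSGS_PER_HOUR = 6_000_000        # Enterprise Plus per-node rating
AVG_MSG_KIB = 100                     # section-wide message size assumption

gib_per_hour = NODE_MSGS_PER_HOUR * AVG_MSG_KIB / (1024 * 1024)
print(f"~{SPOOL_USABLE_GIB / gib_per_hour:.1f} hours of buffering at peak rate")
# -> about 2.1 hours
```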

Enterprise Scaling Cluster

The Enterprise Scaling configuration consists of both an Analytics Cluster and a Platform Cluster. Because large-volume deployments require more resources for sending than for analytics, Message Systems recommends separating the Platform and Analytics roles into separate clusters. This configuration allows you to scale the Platform cluster independently of the Analytics cluster.

The baseline configuration consists of a three-node Analytics Cluster and a three-node Platform Cluster. You may scale sending capacity by incrementally adding Platform nodes to the cluster as needed.

Baseline Performance Ratings

Baseline Cluster Capacity (2 Nodes Operational)    Baseline Peak Cluster Capacity (3 Nodes Operational)    Incremental Platform Node Capacity
12 M msgs/hr                                       18 M msgs/hr                                            6 M msgs/hr
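When planning an Enterprise Scaling deployment, you can estimate the Platform node count for a target volume from the figures above: start from the three-node baseline and add one Platform node per 6 M msgs/hr of additional peak capacity. The Python sketch below shows the arithmetic; the target volume is a hypothetical example, not a recommendation.

```python
# Estimating the Platform node count for a hypothetical target volume,
# using the figures in the table above (three-node baseline at 18 M
# msgs/hr peak, plus 6 M msgs/hr per additional Platform node).

import math

BASELINE_NODES = 3
BASELINE_PEAK_MSGS_HR = 18_000_000
INCREMENTAL_NODE_MSGS_HR = 6_000_000

target_msgs_hr = 30_000_000           # hypothetical requirement
extra_nodes = max(0, math.ceil(
    (target_msgs_hr - BASELINE_PEAK_MSGS_HR) / INCREMENTAL_NODE_MSGS_HR))

print(f"{BASELINE_NODES + extra_nodes} Platform nodes "
      f"for {target_msgs_hr:,} msgs/hr peak")
# -> 5 Platform nodes for 30,000,000 msgs/hr peak
```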

Hardware Specifications

Resource             Specification
CPU Cores            32
CPU Speed            3.2 GHz (min. 2.5 GHz)
Memory               64 GiB RAM
Network Interface    1 Gbps NIC

Storage Configuration

Platform Node

Array             Mount Points             Configuration
All Storage                                6 x 600 GiB 15k RPM HDD
Message Spools*   /var/spool/ecelerity     4 x 600 GiB in RAID10
OS                / (root)                 2 x 600 GiB in RAID1
App Binaries      /opt/msys
Logs              /var/log/ecelerity
Platform DB       /var/db/cassandra

(*) This array should be dedicated to the spools.

Analytics Node

Array             Mount Points             Configuration
All Storage                                4 x 600 GiB 15k RPM HDD
OS                / (root)                 2 x 600 GiB in RAID1
App Binaries      /opt/msys
Logs              /var/log/ecelerity
Analytics DB*     /var/db/vertica          2 x 600 GiB in RAID1

(*) This array should be dedicated to the Analytics DB.
