Summary: An overview of the Alooma architecture, including descriptions of the data flow from input, through the Alooma service, to output.
The basic data flow in Alooma starts with the data source, or "input" (on the left in the diagram above). Data passes through the Alooma service (transformations, encryption, etc.) into Alooma's staging area, and from there is loaded into the customer's target, or "output".
Each of these stages is described in more detail below.
Data sources (called inputs in the user interface) are handled differently based on their type. Data from databases, files, applications, repositories, and the like is pulled on a regular schedule, determined by how the input is configured.
Streaming data and events, in contrast, are pushed to Alooma as they are generated.
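To make the push model concrete, here is a minimal sketch of an application sending one event to a streaming input over HTTPS. The endpoint URL, token, and envelope fields below are illustrative assumptions, not Alooma's documented API:

```python
import json
import urllib.request

# Hypothetical values for illustration only; a real input provides its own
# endpoint URL and per-input token.
ALOOMA_INPUT_URL = "https://inputs.example-alooma-host.com/track"
ALOOMA_TOKEN = "YOUR_INPUT_TOKEN"

def build_event(event_type, payload):
    """Wrap an application event in an illustrative envelope."""
    return {"token": ALOOMA_TOKEN, "type": event_type, "message": payload}

def push_event(event):
    """POST one JSON event; the connection itself is SSL-encrypted."""
    data = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        ALOOMA_INPUT_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # raises on a non-2xx response

event = build_event("signup", {"user_id": 42, "plan": "trial"})
# push_event(event)  # would be called in a real deployment
```

The point is only the shape of the interaction: events are serialized and pushed as they occur, rather than waiting for a scheduled pull.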
All connections are SSL-encrypted, and Alooma supports additional secure connectivity options: SSH/Reverse-SSH tunneling, VPC peering, and site-to-site VPN.
Within the Alooma service, data is transformed and prepared for loading into the target (output), then sent to Alooma's staging bucket. While in staging, data is encrypted with a customer-specific key.
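Transformations in Alooma are written as Python functions that receive each event as a dictionary and return the (possibly modified) event, or None to drop it. A minimal sketch, with illustrative field names:

```python
def transform(event):
    """Prepare one event for the output; returning None discards it."""
    # Drop events that should never be loaded into the target.
    if event.get("type") == "heartbeat":
        return None
    # Normalize an illustrative field name to the output schema.
    if "userId" in event:
        event["user_id"] = event.pop("userId")
    # Remove keys that should not reach the data warehouse.
    event.pop("internal_debug", None)
    return event

# Example: an event as it might arrive from an input.
incoming = {"type": "signup", "userId": 42, "internal_debug": True}
outgoing = transform(incoming)
```

Here `outgoing` is `{"type": "signup", "user_id": 42}`: the field was renamed and the internal key stripped before the event continues to staging.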
For longer-term retention, customers can configure their own (additional) S3 bucket, and Alooma can automatically store all events' raw data there. Events are stored exactly as they are received, before any processing by Alooma.
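Because raw events are written as received, a retention bucket naturally ends up partitioned by arrival time. The key scheme below is purely an assumption for illustration; Alooma's actual object layout is not described in this document:

```python
import json
from datetime import datetime, timezone

def raw_event_key(input_name, received_at):
    """Build an illustrative S3 object key, partitioned by arrival time.
    This key scheme is hypothetical, not Alooma's documented layout."""
    return (
        f"raw/{input_name}/{received_at:%Y/%m/%d/%H}/"
        f"{received_at:%Y%m%dT%H%M%S}.json"
    )

received_at = datetime(2019, 6, 1, 12, 30, 15, tzinfo=timezone.utc)
key = raw_event_key("webhooks", received_at)
body = json.dumps({"type": "signup", "userId": 42})  # stored as received, untransformed
```

Storing the untransformed payload means the original data can be replayed or re-processed later, independent of whatever transformations were active at ingest time.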
The data is loaded from Alooma's staging into the target (called an output in the user interface): the customer's cloud data warehouse (Azure SQL Data Warehouse, Google BigQuery, Amazon Redshift, Snowflake, etc.) or another kind of storage (S3, etc.). The data remains SSL-encrypted in transit.
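Loading from a staging bucket into a warehouse is typically done with the warehouse's bulk-load command; for Amazon Redshift that is COPY from S3. The statement below is a sketch of that general pattern only: the bucket, table, and IAM role are hypothetical, not Alooma's actual configuration:

```python
# Illustrative values only; not Alooma's internals.
STAGING_PATH = "s3://example-staging-bucket/events/2019/06/"
TARGET_TABLE = "analytics.events"
IAM_ROLE = "arn:aws:iam::123456789012:role/example-redshift-load"

def build_copy_statement(table, s3_path, iam_role):
    """Build a Redshift COPY statement that bulk-loads JSON from S3."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS JSON 'auto' GZIP;"
    )

stmt = build_copy_statement(TARGET_TABLE, STAGING_PATH, IAM_ROLE)
```

Each warehouse has its own equivalent (BigQuery load jobs, Snowflake's COPY INTO, and so on), but the shape is the same: a bulk load from the staging location over an encrypted connection.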