In the previous article, I discussed automation techniques for secure, developer-centric multi-stack builds and deployments. In this one, I will focus on automating application state setup. Before diving in, let us first define what application state is.
Application state is the data driving your application logic. It may originate from various sources, such as databases, data stores, message buses, configuration files, or even external data APIs. Setting it up can be a time-consuming and challenging process, contributing substantial development overhead.
For example, to build a patch for an application glitch, you may need to reproduce it first in a dedicated environment with production-like data. While DevOps or a DBA can help manage this, I believe that developers should lead the effort to automate application state setup into reusable workflows. Not only would it reduce frustration with the usually lengthy and mundane nature of this process, but it would also provide the ins and outs of how to manage the application state. All in all, you want any system issues to be addressed quickly and with ease.
Besides addressing ongoing software issues, automated system state setup also comes in handy when extending existing application features or implementing new ones. On top of that, you can even start applying test-driven or behavior-driven development techniques. Either way, you would set the state of your application first, followed by writing the application logic. All this results in high-quality application code that works as expected before hitting a staging or production system.
Let me show you how to use the endly automation runner to accomplish various state setup tasks. Long story short, endly not only allows you to use its SSH executor service to script any logic securely, but also provides a unified setup and testing methodology. There is little difference between setting up state for a traditional DBMS like Oracle or MySQL and for a cloud database or data store like BigQuery, DynamoDB, or Firebase. What’s more, setting up and pushing messages to a message bus is also unified across vendors such as AWS SQS, Google Pub/Sub, or Kafka.
Using a containerized instance for your application data layer is an excellent choice, providing a convenient way to set up a clean or predefined state.
While using docker-compose or the docker CLI with the SSH executor is a reasonable option, the following example shows a direct endly Docker API integration workflow that deploys Aerospike and MySQL securely without compromising any credentials.
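A minimal sketch of such a workflow is shown below. The image tags, container names, and port mappings are illustrative, and the secrets entry assumes a mysql credential has been registered with endly’s secret store, so the password never appears in plain text:

```yaml
pipeline:
  services:
    aerospike:
      action: docker:run
      image: aerospike/aerospike-server:latest
      name: aerodb
      ports:
        3000: 3000
    mysql:
      action: docker:run
      image: mysql:5.7
      name: mydb
      ports:
        3306: 3306
      env:
        # resolved from the endly secret store, not hard-coded
        MYSQL_ROOT_PASSWORD: '${mysql.password}'
      secrets:
        mysql: mysql
```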
Typical database setup entails deploying a specific database version, followed by loading the database schema and predefined data.
For example, the following workflow creates a PostgreSQL mydb database and loads the schema along with the data located in the mydb/data folder. Each file corresponds to a table name in the destination database.
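The workflow could be sketched as follows, using endly’s dsunit service; the connection descriptor, schema file location, and credential name (pg) are placeholders to adapt to your environment:

```yaml
pipeline:
  createDb:
    action: dsunit:init
    datastore: mydb
    recreate: true
    config:
      driverName: postgres
      descriptor: host=127.0.0.1 port=5432 user=[username] password=[password] dbname=[dbname] sslmode=disable
      credentials: pg
    scripts:
      - URL: mydb/schema.sql
  populate:
    action: dsunit:prepare
    datastore: mydb
    URL: mydb/data
```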
In the snippet above, the predefined data is located in the mydb/data folder. It can use JSON or CSV format, where each file name maps directly to a table name. What’s more, when preparing an application state, you can use dynamic expressions to create timestamps or other data points on the fly.
For example, the following JSON file defines data that uses the $FormatTime, $Rand, and $uuid dynamic UDF expressions. In addition, the first empty element in a set requests data truncation prior to data loading.
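For instance, a mydb/data/users.json file along these lines would truncate the users table and insert one dynamically generated row; the column names and the exact UDF argument formats here are illustrative:

```json
[
  {},
  {
    "id": 1,
    "uid": "$uuid.next",
    "score": "$Rand(0,100)",
    "created": "$FormatTime('now','yyyy-MM-dd')"
  }
]
```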
Similarly to the database setup, you start with a specific vendor deployment, followed by optional schema and data loading.
What’s more, endly uses the same service to deal with cache, NoSQL, and RDBMS data storage systems, providing a unified user experience.
For example, the following workflow deploys and populates a MongoDB data store.
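A sketch of such a workflow, pairing a containerized MongoDB with the same dsunit service used earlier; the driver name and connection parameters are assumptions to adapt to your setup:

```yaml
pipeline:
  deploy:
    action: docker:run
    image: mongo:4.0
    name: mymongo
    ports:
      27017: 27017
  register:
    action: dsunit:register
    datastore: mydb
    config:
      driverName: mgc
      parameters:
        dbname: mydb
        host: 127.0.0.1
  populate:
    action: dsunit:prepare
    datastore: mydb
    URL: mydb/data
```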
Some key-value data store vendors, like Aerospike, support type-safe nested data structures. For example, a column can be defined as Map&lt;Integer, Map&lt;String, Integer&gt;&gt;. Setup in this case might be problematic with a pure JSON data file, since a JSON key always has to be of string type. To address data type representation, you can use the casting expressions $AsInt and $AsFloat.
For example, the following Aerospike setup data file uses an events map whose keys are of integer type.
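For illustration, the data file could look like the following, where the $AsInt casts make both the map keys and the nested counter values integers (the bin names are hypothetical):

```json
[
  {
    "id": 1,
    "events": {
      "$AsInt(100)": {
        "clicks": "$AsInt(3)",
        "views": "$AsInt(10)"
      }
    }
  }
]
```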
Cloud data storage
The data loading techniques presented previously can also be used with cloud-managed data stores like BigQuery, DynamoDB, Firestore, or Firebase. In addition, you can use cloud APIs to perform certain setup tasks.
For example, if you need to restore a production BigQuery dataset to a testing project, you can use the BigQuery copy task. The following workflow shows how to implement this by selecting and copying dataset tables.
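A single-table version of the idea might be sketched as below; the gcp/bigquery action name, credential name, and project/dataset identifiers are assumptions, and a full implementation would first list the dataset tables and loop over them:

```yaml
pipeline:
  copyTable:
    action: gcp/bigquery:copy
    credentials: gcp-test
    sourceTable: prod-project:mydataset.mytable
    destinationTable: test-project:mydataset.mytable
```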
When working with a message bus, you can automate the deployment of your topics, queues, and subscriptions, followed by publishing data. The data can either be loaded via an external URL or inlined as part of the automation workflow.
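As a sketch, a queue setup followed by a data push could look like this; the msg service action and field names follow my reading of endly and should be treated as assumptions:

```yaml
pipeline:
  setup:
    action: msg:setupResource
    resources:
      - URL: myQueue
        type: queue
        vendor: aws
  publish:
    action: msg:push
    dest:
      URL: myQueue
      vendor: aws
    source:
      URL: data.json
```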
where data.json uses the following format:
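A plausible shape for such a file, with each entry carrying a message body and optional attributes (the exact envelope fields are an assumption):

```json
[
  {
    "data": "this is my 1st message",
    "attributes": {
      "priority": "high"
    }
  },
  {
    "data": "this is my 2nd message"
  }
]
```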
AWS Simple Queue Service (SQS)
Generating Test Data
When setting up application state, you may also need to generate dummy data.
The following workflow generates a test data file in JSON format.
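The generator workflow itself is not reproduced here; conceptually, it expands a template using the same dynamic UDFs shown earlier and writes the result to a JSON file. A template row might look like this (the field names and UDF arguments are hypothetical):

```json
[
  {
    "id": "$Rand(1,100000)",
    "uid": "$uuid.next",
    "created": "$FormatTime('now','yyyy-MM-dd hh:mm:ss')"
  }
]
```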
Many applications use configuration to manage application behavior. With endly, you can easily upload an existing configuration file to the local file system or to cloud storage, expanding any dynamic part with a $ expression, or you can assemble the whole configuration in the automation workflow.
For example, the following workflow copies config.properties into a war file, substituting the $changeMe placeholder with the value specified in the init section.
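Sketched with endly’s storage service, where expand: true substitutes $changeMe from the workflow state; the destination is shown as a plain path for simplicity, whereas the original example targeted a war archive:

```yaml
init:
  changeMe: my-actual-value
pipeline:
  updateConfig:
    action: storage:copy
    expand: true
    source:
      URL: config/config.properties
    dest:
      URL: /opt/app/config.properties
```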
Take another example, where the configuration file is assembled dynamically:
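One way to do this is to define the configuration as a data structure in the init section and upload it with the storage service; the field names are illustrative, and sourceKey reflects my understanding of the endly storage upload API:

```yaml
init:
  appConfig:
    port: 8080
    dbName: mydb
    logLevel: debug
pipeline:
  buildConfig:
    action: storage:upload
    sourceKey: appConfig
    dest:
      URL: /opt/app/config.json
```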
In this article, I presented automation techniques for application state setup across a wide spectrum of storage solutions. Setting state is critical for both quick bug identification and rapid software development. Thanks to this, developers are able to focus more on developing application logic and spend less time chasing errors. All in all, automation translates directly into higher-quality deliverables and engineering confidence.
All the automation workflows presented here are available in the Software Development Endly Automation Workflow Templates git repository.