I build distributed, highly available, high-performance cloud solutions.
    AWS or Azure, SQL or NoSQL, .Net and Java, data stream processing and much more.
    Let's find out how far we can go!


Maxim Shaw, Cloud application consultant.


A software technology lead with more than 14 years of experience, lately focused on robust, high-performance cloud solutions.
I help enterprise clients tackle the complexity of design, performance, and scalability for the cloud, building efficient, cost-effective solutions while reducing the chain of responsibility and delivery to a single point, unlike big consultancy companies.
From early-stage concepts and application prototyping to MVPs, production implementation, and subsequent deployments. Deep, hands-on experience in applying enterprise design patterns and practices has helped clients quickly release software to back their sales campaigns and other marketing initiatives.

Constant research and accumulated design knowledge have enabled me to successfully design and implement challenging projects for Gap, PacificMetrics (ACT), AMD, BellMedia, JustEnergy, Validus Holdings, and others.
I am here to share my experience and skills and help build first-class cloud solutions.

.Net/Core and Java, AWS and Azure, SQL and NoSQL: let's find out what would work best for your business case and build a tool that will serve your business needs.



Good Service-Oriented Architecture is the most important part of a solution. It is the foundation that all further work depends on. Done correctly, it keeps a solution healthy for a long time, uses cloud resources efficiently, scales properly, and stays easy to extend.
Having built and tuned several high-performance cloud solutions, I have mastered the ability to get it right from the start!


It is easy enough to learn to write code in weeks. Writing code that is easy to read, extend, and maintain, that uses a framework to its full capacity, that is testable, and that handles complex structures elegantly takes years of practice, talent, and determination.
I discovered the art of coding at the age of 12, and for more than the last 13 years I have been building solutions daily. Code is a part of me, and implementing a great design is a pleasure I enjoy.

CI & Deployment

Deployment of a complex cloud solution can be very cumbersome. The many moving parts and dependencies are hard to remember and track, and the more a solution grows, the harder it is to keep everything moving smoothly. An unexpected crash or a missed release can cost too much to ignore.
To avoid these pitfalls, I use Continuous Integration tools to streamline deployments and make them as risk-free as possible.


Improved architecture prototype for PacificMetrics’ “Unity” project

The project is an educational formative and summative assessment system: a highly available, fault-tolerant cloud solution with a requirement of 250K write requests per second at the time of my engagement. Two types of NoSQL data store clusters were maintained for this system: one as the source of truth with high write throughput, the other as a data mirror providing sophisticated search capability, but with the significant downside of slow write response. The client, struggling with the performance of sequential, synchronous writes into the two data stores, turned to the agency I was working for at the time to help overcome the performance bottleneck.

The original architecture diagram

Original sequential data writes into Cassandra and Elasticsearch.

I was given the opportunity to rethink the architecture and build a proof of concept for an improved solution. Since the search data store was slow to write, it was clear that writes into that cluster should be asynchronous, keeping it eventually consistent with the primary write store. Luckily, eventual consistency was acceptable here. This is where CQRS and an improved cloud architecture came into play and changed performance drastically.

The monolithic application, which handled data writes and reads in the same domain, was rethought and rebuilt. Business logic for writes and reads was segregated into separate applications that could be hosted independently on separate cloud instances, each scaled to the expected demand for write and read requests respectively. For instance, with a very rough estimate of 5.5K writes per second on a single instance without much additional logic, we would need about 45 instances to serve 250K requests per second*. However, adding synchronous data replication into the search cluster (needed for future data mining and analysis) would drive the writes per instance down due to the higher latency of the search store, and increasing the number of write instances would not help: the search cluster synchronization would become the bottleneck for the entire system.

Synchronization therefore had to be done asynchronously, without awaiting the operation's completion. To achieve this I used AWS SQS (Simple Queue Service) as a message queue. SQS provides a highly fault-tolerant messaging system that removes the worry of losing data, ensuring the data becomes eventually consistent. The request execution flow with the new architecture is reflected in the following diagram.
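The core of the pattern above can be sketched in a few lines of Java. This is an illustrative sketch only, not the project's actual code: an in-memory BlockingQueue stands in for AWS SQS, and two plain lists stand in for the Cassandra and Elasticsearch clusters. The point is the shape of the flow: the write path returns as soon as the primary store and the queue accept the record, while a background consumer indexes into the slow search store at its own pace.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of asynchronous dual writes: synchronous write to the source-of-truth
// store, asynchronous replication to the search store via a message queue.
// The queue stands in for AWS SQS; the two lists are hypothetical placeholders
// for the Cassandra and Elasticsearch clients.
public class AsyncDualWrite {

    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static final List<String> primaryStore = new CopyOnWriteArrayList<>(); // "Cassandra"
    static final List<String> searchStore  = new CopyOnWriteArrayList<>(); // "Elasticsearch"

    // Write service: persists to the primary store and enqueues the record for
    // eventual indexing; the caller never waits on the slow search store.
    static void write(String record) throws InterruptedException {
        primaryStore.add(record); // fast, synchronous write
        queue.put(record);        // hand off to the queue (SQS in production)
    }

    // Queue consumer: drains messages and indexes them into the search store,
    // providing eventual consistency between the two clusters.
    static Thread startIndexer() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    searchStore.add(queue.take()); // slow write, off the request path
                }
            } catch (InterruptedException e) {
                // shutting down
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        startIndexer();
        for (int i = 0; i < 1000; i++) write("record-" + i);
        // The primary store is consistent immediately; the search store catches up.
        while (searchStore.size() < 1000) Thread.sleep(10);
        System.out.println(primaryStore.size() + " " + searchStore.size());
    }
}
```

In production the hand-off to SQS also decouples failure domains: if the search cluster is down, messages simply accumulate in the queue and are indexed once it recovers, rather than failing the write request.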

Improved architecture with eventual data consistency

Improved asynchronous writes into Cassandra and Elasticsearch.

The described architecture and its proper implementation helped eliminate the performance bottleneck and employed the cloud infrastructure to its full potential.

Cloud infrastructure and tools used on this project: Apache Cassandra, Elasticsearch, AWS EC2, AWS SQS, Java with a Jersey2 RESTful web service.

*The estimate is based on Netflix's research into 1M writes per second, described in the articles here and here.

An Azure cloud solution to process data for Gap’s “Passport to Summer” sweepstakes campaign

Goal: increase social media presence and drive sales through customer engagement

During the summer months of 2016, Gap customers in the US and Canada participated in a Gap sweepstakes campaign by posting or tweeting photos featuring the brand along with a message or hashtag containing a keyword that tracked their entry. Posts could be made on three social networks: Facebook, Twitter, and Instagram. Every entry increased the chance of winning a gift from Gap, while helping Gap boost its brand's media presence and increase sales.

A backend solution, which in fact comprised three sub-solutions, one for each social network's API, monitored the brand's followers and their activity on every network, collecting entries by hashtag and keyword and adding participants' entry data to an SQL Azure database.

Once a day, a data processing solution would kick in and perform the following steps:

1. Select all registered participants and upload them into a message queue (Azure Service Bus).
   - To keep this operation fast, several parallel processes upload participants in bulk.
2. Message listeners handle each broadcast participant identity and collect data from that participant's registered social networks.
3. Collected data is compared with the previous day's data: duplicates are excluded, new entries counted, totals recalculated, caps and other restrictions applied, and a new account summary saved.
4. The message queue is cleared once all user accounts have been processed.
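The steps above can be sketched as a single pipeline. This is an illustrative sketch, not the production code: the real system was .Net 4.5 on Azure Service Bus, while here a Java thread pool and an in-memory queue stand in for the parallel bulk uploaders and the bus, and all names and data shapes are hypothetical.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the daily entry-processing job: parallel bulk upload of
// participants to a queue, then per-participant dedup, cap, and summary.
public class DailyEntryProcessor {

    record Summary(int totalEntries) {}

    static Map<String, Summary> runDailyJob(
            List<String> participants,
            Map<String, Set<String>> todaysPosts,       // participant -> post ids seen today
            Map<String, Set<String>> previouslyCounted, // participant -> post ids already counted
            int dailyCap) throws Exception {

        // Step 1: upload participants into the queue in parallel bulks
        // (the queue stands in for Azure Service Bus).
        BlockingQueue<String> bus = new LinkedBlockingQueue<>();
        ExecutorService uploaders = Executors.newFixedThreadPool(4);
        int bulkSize = 100;
        for (int i = 0; i < participants.size(); i += bulkSize) {
            List<String> bulk =
                participants.subList(i, Math.min(i + bulkSize, participants.size()));
            uploaders.submit(() -> bus.addAll(bulk));
        }
        uploaders.shutdown();
        uploaders.awaitTermination(1, TimeUnit.MINUTES);

        // Steps 2-3: listeners drain the queue, collect each participant's posts,
        // exclude duplicates already counted, apply the daily cap, recalculate
        // totals, and save a new account summary.
        Map<String, Summary> summaries = new ConcurrentHashMap<>();
        String id;
        while ((id = bus.poll()) != null) {  // step 4: queue empties as accounts are processed
            Set<String> posts = new HashSet<>(todaysPosts.getOrDefault(id, Set.of()));
            posts.removeAll(previouslyCounted.getOrDefault(id, Set.of())); // exclude duplicates
            int newEntries = Math.min(posts.size(), dailyCap);             // apply the cap
            int previous = previouslyCounted.getOrDefault(id, Set.of()).size();
            summaries.put(id, new Summary(previous + newEntries));         // recalculated total
        }
        return summaries;
    }
}
```

For example, a participant with one previously counted post and three new posts under a daily cap of two ends the day with a total of three entries: the duplicate is dropped, and only two of the new posts count.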

Another application, which was outside my responsibility, ran weekly draws over users' entry summaries and selected the winners.

Technology stack and cloud infrastructure: .Net 4.5, Azure Web Apps, Azure Scheduler, Azure Service Bus, Facebook API, Twitter API, Instagram API

This project shows how a brand was able to boost its social media presence and increase sales through its loyal customers.