Diving into the Cloud
Before the Cloud
Large capital outlays to begin a digital transformation have historically been a barrier that kept small and medium-sized businesses from competing with larger corporations. The cost of servers, data center space, and skilled personnel to configure and manage hardware can alone be enough to pull the plug on a project before it even begins. Software services like Google Docs, Microsoft 365, and SteadyHOPS help reduce costs and make basic business processes, such as data privacy compliance, feasible for these organizations. Before the emergence of the Cloud, however, providing automation, management, and collaboration capabilities for more complex business processes was still largely out of reach due to upfront costs. Even if the cost barrier was overcome, geographical constraints could limit the availability of offerings. Applications running out of a data center in Florida aren’t going to be very responsive to users in China, and companies wanting to provide application services to geographically distributed employees or customers would have to make even more capital outlays to stand up data centers close to their user base. The software industry has come a long way, and with a combination of agile software development frameworks and cloud services, small and medium-sized businesses now have a much clearer path to providing software services that work for geographically distributed customers and employees.
Agile, Quick and Dirty
Before we dive into the Cloud, a quick and dirty overview of how Agile enables these digital transformations can help set the stage. Before Agile there was the Waterfall approach to project management and systems development, which is just a fancy way of saying that the entire project was planned out in advance before work began. After the project plan was complete, technical resources would begin designing and building out the system, trying to meet the guesstimated deadline set in the plan. If the project was estimated poorly, which it often was, and the plug had to be pulled due to cost or time overruns, there may not have been enough progress on the system to have anything to show for all the effort.

On top of the inherent risks and long timelines, it was feast or famine from a resource utilization perspective. Business analysts would have more work than they could handle during project planning while technical resources sat underutilized; conversely, during the build, technical resources would be overloaded while the business analysts were underutilized. The client or other stakeholder awaiting completion might also not see a delivered piece of software for six months or more.

The main reason Waterfall remained the de facto way of managing a software project for so long was how software updates were published. Most software used to be developed as a desktop application; once upon a time you installed it from a physical media device such as a CD or a floppy disk instead of downloading it online like we do now. The software would connect to the internet periodically and download a small update for itself. End users didn’t want to be bothered with installing updates for their desktop applications all the time, so companies would try to release updates once or twice a year, which synergized well with the Waterfall approach.
Nowadays, the majority of software applications are browser or app based. Software is upgraded silently and constantly, sometimes as frequently as every day. Most of the time you would never know an update took place unless you notice a new button on the screen or read the product team’s email discussing the new features. With the ability to push updates so quickly came the agile development frameworks. The main principles you need to know are that software is developed in small chunks of two to four weeks, called sprints, and that it’s built in such a manner that at the end of each sprint there is a delivered piece of software that functions and is ready to be used by the client or other end stakeholder.

The amount of functionality delivered in this time frame is of course smaller, but in today’s fast-paced world this arrangement makes for happy clients and happy employees. Clients get high-priority bug fixes and requested features in their hands much more quickly than in the old model. Working in two-to-four-week sprints also makes estimates much more accurate and achievable for the resources doing the work. This prevents employees from cycling between underutilization, leading to boredom, and overutilization, leading to excessive overtime and burnout while trying to make a deadline. Utilization remains stable as the software grows and evolves under the product owner’s direction.

Agile is also great because it allows projects to fail quickly and gracefully. Market downturns happen, investors cut funding, and other business events can lead to the termination of a project. Using agile frameworks means that when the plug is pulled, there is still delivered, functioning software in the hands of the purchaser. It may not be everything that was desired at the beginning of the project, but it is usable, meaning the investment to date can still generate a return.
If the plug was pulled prematurely in the waterfall method, there was a high risk of total loss.
Agile is to Waterfall as the Cloud is to buying your own servers and data center and hosting your applications within them. Ok, ok, let’s clarify that last statement. When you buy a server to host an application and start developing on it, it’s like buying all the food you plan on eating for the year before taking your first bite. That’s crazy, right?? Well, the Cloud agrees. Hosting applications in the Cloud lets you pay for only the resources you actually consume, such as bandwidth, CPU time, and storage, like buying each meal when you’re hungry and eating it then. This is the core of the Cloud’s value proposition: you incur costs only as you need to, and this alone should be enough to get you excited about it. Just in case you need a little bit more to get excited about, there are many other benefits. Let’s touch on a few of them briefly.
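To make the meals analogy concrete, here is a toy cost comparison in Python. Every number below is invented purely for illustration; real prices vary widely by provider, region, and hardware.

```python
# Hypothetical figures for illustration only -- not real vendor pricing.
UPFRONT_SERVER_COST = 12_000.0   # buy-and-host: server purchase + setup (assumed)
MONTHLY_COLOCATION = 250.0       # rack space, power, bandwidth (assumed)
CLOUD_HOURLY_RATE = 0.40         # a comparable cloud instance per hour (assumed)


def cumulative_cost_owned(months: int) -> float:
    """Total cost of owning the hardware after a number of months."""
    return UPFRONT_SERVER_COST + MONTHLY_COLOCATION * months


def cumulative_cost_cloud(months: int, hours_per_month: float = 730.0) -> float:
    """Total cloud cost: you pay only for the hours you actually consume."""
    return CLOUD_HOURLY_RATE * hours_per_month * months


# Even running flat out 24/7, the cloud bill starts at zero and tracks
# usage, while the owned server costs $12,000 before serving a single user.
for months in (1, 12, 36):
    print(months, cumulative_cost_owned(months), cumulative_cost_cloud(months))
```

The point is not which line is cheaper at month 36; it is that the cloud curve starts at zero, so a project killed early loses only what it consumed.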
Let’s say a server has an electrical failure, or a hard disk reaches its end of life and can no longer keep your application functioning. Cloud service providers offer a backup set of hardware, and your application is brought right back online with minimal or no downtime, and with no action required on your part. Basic redundancy is available out of the box, but you can configure all sorts of additional redundancies, from extra backup servers and hard disks to backup data centers in a different geographic location. These backups can be configured to run all the time, ready to go live at the flip of a switch, or to sit “cold,” requiring a boot-up and switch-over that causes some application downtime but reduces costs. You would need to weigh application criticality, cost tolerance, the impact of downtime, and other factors to decide what’s best.
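One way to weigh those factors is to compare the expected annual cost of each option: what you pay to keep the standby around, plus what you expect to lose to downtime when a failover happens. The figures below are made up for illustration; plug in your own standby price, failure rate, and revenue numbers.

```python
def expected_annual_cost(standby_monthly: float,
                         downtime_hours_per_failover: float,
                         failovers_per_year: float,
                         revenue_per_hour: float) -> float:
    """Standby cost for the year plus expected revenue lost to downtime."""
    standby = standby_monthly * 12
    lost_revenue = downtime_hours_per_failover * failovers_per_year * revenue_per_hour
    return standby + lost_revenue


# Hot standby: costly to keep running, but near-zero downtime (assumed figures).
hot = expected_annual_cost(standby_monthly=400,
                           downtime_hours_per_failover=0.0,
                           failovers_per_year=2,
                           revenue_per_hour=500)

# Cold standby: cheap to keep, but each failover costs about an hour of downtime.
cold = expected_annual_cost(standby_monthly=50,
                            downtime_hours_per_failover=1.0,
                            failovers_per_year=2,
                            revenue_per_hour=500)

print(hot, cold)  # pick the cheaper option for your risk profile
```

With these particular assumptions the cold standby wins; crank up the revenue per hour and the hot standby quickly pays for itself.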
Ahh, too much growth, too much traffic – the best of problems! Under the old model of planning for and purchasing hardware up front, a key risk to consider is the cost of constrained growth if you underestimate your needs. For applications where revenue is directly tied to uptime and performance, this can be devastating. You don’t want all the product development and marketing work that went into bringing the season’s hottest Christmas toy to market to be undone when it goes viral and you don’t have enough server hardware to meet traffic demand, do you? Neither does the Cloud. That’s why you can configure automatic scaling of hardware. Cloud service providers can seamlessly grant your application more powerful server hardware, or seamlessly bring additional copies of your servers (called instances) online, to meet spikes in traffic demand. Once the demand cools off, the number and power of the servers are reduced back down to normal. Costs are incurred for the additional resources, of course, but automatic scaling is certainly a very powerful feature and adds to the value proposition of the Cloud. Did I mention you can configure alerts to be notified when all of this happens?
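The scaling behavior described above boils down to a simple rule: add an instance when average CPU runs hot, remove one when it runs cold, and always stay within a configured floor and ceiling. The sketch below is a toy model of that kind of CPU-based rule, with assumed thresholds; it is not any provider’s actual API.

```python
def desired_instance_count(current: int, cpu_percent: float,
                           scale_out_above: float = 70.0,
                           scale_in_below: float = 30.0,
                           min_count: int = 1, max_count: int = 10) -> int:
    """Return the instance count a CPU-based autoscale rule would target.

    Thresholds and limits are assumptions for illustration: scale out
    when average CPU exceeds 70%, scale in below 30%, and clamp the
    result between the configured floor and ceiling.
    """
    if cpu_percent > scale_out_above:
        current += 1          # traffic spike: bring another instance online
    elif cpu_percent < scale_in_below:
        current -= 1          # demand cooled off: scale back in
    return max(min_count, min(max_count, current))


# A spike pushes average CPU to 85% -> grow; a quiet night at 10% -> shrink.
print(desired_instance_count(2, 85.0))   # -> 3
print(desired_instance_count(3, 10.0))   # -> 2
print(desired_instance_count(10, 95.0))  # -> 10 (ceiling holds costs down)
```

Real autoscale rules add cooldown periods and averaging windows so the fleet doesn’t thrash, but the core decision is exactly this clamped threshold check.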
Any company with deep pockets and the ability to attract people with the right skills can build these sorts of features itself. The Cloud, however, greatly reduces the resource burden of standing up software. Cloud service providers have small armies of technical staff configuring, verifying, and managing all the complexities that go into standing up these networks of computers to host applications. You can think of using the Cloud as renting a small fraction of that labor, along with the physical hardware. This allows you to focus your efforts on managing your business and developing the applications that support it.
Application Resource Optimization
Cloud service providers offer a range of services that, used correctly, minimize the cost of certain aspects of an application. For instance, storing documents such as PDFs in a database is a sub-optimal choice that results in increased cost; a blob (binary large object) or file storage service is the correct, lower-cost way to store them in the Cloud. In this regard, it’s important to carefully consider application architecture decisions when migrating to the Cloud or developing a new application. Incorrect architecture decisions become more and more difficult to change as an application grows, so it’s advisable to engage a partner with Cloud expertise early in the process, when mistakes are cheaper to fix. Correct architecture decisions and a consultation will pay for themselves several times over during an application’s life cycle, and an optimal architecture can serve as a competitive advantage.
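A back-of-the-envelope calculation shows why the storage choice matters. The per-gigabyte prices below are assumptions chosen only to illustrate the typical gap between managed database storage and blob storage; real pricing varies by provider and tier.

```python
# Hypothetical per-GB monthly prices -- actual rates vary by provider and tier.
DATABASE_PRICE_PER_GB = 0.25   # managed relational database storage (assumed)
BLOB_PRICE_PER_GB = 0.02       # blob/object storage (assumed)


def monthly_storage_cost(gigabytes: float, price_per_gb: float) -> float:
    """Monthly storage bill for a given volume at a given per-GB rate."""
    return gigabytes * price_per_gb


docs_gb = 500  # e.g. a growing archive of PDF documents
in_database = monthly_storage_cost(docs_gb, DATABASE_PRICE_PER_GB)
in_blob = monthly_storage_cost(docs_gb, BLOB_PRICE_PER_GB)
print(in_database, in_blob)  # same documents, an order of magnitude apart
```

The common pattern is to keep the documents themselves in blob storage and store only a reference to each one (a URL or key) in the database, getting cheap storage and fast lookups at the same time.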
What about Tortoise and Hare?
The Cloud brings some amazing benefits, and we have just scratched the surface of the value that can be extracted from leveraging it. Bringing your applications into the Cloud can be a daunting task for those unfamiliar with the process, and partnering with a trusted provider of expertise in the domain can be a great way to kick-start your journey. Tortoise and Hare Software has extensive experience both in developing software applications and in bringing them into the Cloud. We should know: SteadyHOPS, our very own data privacy request compliance solution, lives in the Cloud on the Microsoft Azure platform. We ensure our application complies with all applicable data privacy regulations and are well equipped to help yours comply too. Contact us today for a free consultation.