Painting Yourself Into Corners

[Photo: Pan Am No-Gi, 2021]

In Brazilian Jiu-Jitsu, a new white belt will often learn a technique in training and drill it repeatedly with their partners. The student will memorize every detail of the technique and, with a willing training partner (known as an uke), be able to pull it off. However, when sparring or competing, the student may be unable to use the technique successfully and conclude that it is deficient. What I have found through teaching Jiu-Jitsu is that the student becomes frustrated and dismisses the technique as ineffective because they only learned how to execute it; they do not understand why it works.

Understanding why it works is essential because a real situation introduces several variables. One is that your opponent may also know the technique and actively resist it, which requires adjusting the technique to the situation at hand for it to work. Those adjustments are based on your understanding of the fundamentals behind the technique. Once you learn and understand why a technique works, you can apply any number of modifications to it to fit the situation you are in without relearning or refactoring your game.

As with Jiu-Jitsu, in modern software development and operations environments, we are often intensely focused on the details of implementing ideas: which coding language, which framework, which CI/CD pipeline, which cloud provider and so on. These details are important, require skill and practice to execute, and should not be overlooked. However, in our zeal to find the most clever, innovative or effortless way to implement them, we often overlook the core fundamentals behind why we are doing it in the first place.

This has been the case with DevOps. Despite the industry having practiced DevOps for over 15 years, many of its practitioners do not understand what it actually is. Many will tell you that it is the use of certain tools, people in one role taking on the duties and responsibilities of another, the use of particular platforms and so on. The DevOps vendor and consulting landscape is full of tools, shortcuts, automation and metrics. While using these can help you practice DevOps, their use does not constitute “doing DevOps.”

Very few understand that DevOps is, at its core, an ideology of interoperation and cooperation: eliminating silos between areas of concern without eliminating the need for the specialized skills and experience particular to those areas. Its goal is to inform the various stages of application development, deployment, operations and maintenance with feedback and context from one another.

FinOps, as a more nascent discipline, is already showing signs of the same problem. The FinOps vendor landscape is inundated with startups offering products that will “allow your organization to do FinOps,” featuring AI/ML that promises to eliminate the need for you to understand what is happening. While these products differ in many ways, the one thing they have in common is that they are all optimization tools. These tools are fine, but do not mistake their use for practicing FinOps. They do not offer an understanding of the fundamentals behind their usage, or of whether they are being used in the best way. When it comes time to “do FinOps” in a different context, those tools may not be useful.

Like DevOps, FinOps is an ideology and a set of practices supporting that ideology. The ideology comprises the core fundamentals behind considering the impact of your architectural decisions on cost. It also includes forecasting, and treating business financial goals as part of the requirements for application and infrastructure design and implementation. Tools in the vendor space are great for automating some of these practices, but they will realize only a small fraction of what is possible through understanding and practicing the fundamentals behind FinOps. This understanding is critical as the movement continues to establish itself; otherwise, a lot of organizations will find themselves doing FinOps wrong just when they really need it, and concluding that the tools and techniques do not really work.

FinOps, like performance, resilience and security, involves a set of concerns that can be addressed primarily, and far more easily, at the whiteboard rather than at the keyboard. This includes architecture, instrumentation, remediation and the automation of those processes. We need to establish the architectural considerations and practice of the fundamentals of FinOps in every workload, rather than concentrating on optimization after implementation. Running through these considerations as part of the whiteboarding process will make implementing FinOps processes much easier and more effective.

Some important principles to consider involve minimizing: use less data, move less data, store less data, crunch less data. Use the smallest amount of compute possible, and the least expensive architecture possible. For example, does this workload need a GPU? Does it have to run on x86, or can you use ARM? Do you need the entire logging and metrics output, or can you filter things out at the source prior to aggregation? What data needs to move out to the internet? If you use a managed service, can you repatriate that workload without a large lift?
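To make the source-side filtering question concrete, here is a minimal Python sketch that drops noisy DEBUG records before they are ever shipped. The aggregator target is an assumption; your platform’s log shipper will have its own mechanism:

```python
import logging

class DropDebugFilter(logging.Filter):
    """Discard DEBUG records at the source, before aggregation."""
    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record; it is never shipped or billed.
        return record.levelno > logging.DEBUG

# Imagine this handler forwards records to your log aggregator.
handler = logging.StreamHandler()
handler.addFilter(DropDebugFilter())

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("cache miss for key %s", "user:42")                 # filtered, never leaves the host
logger.warning("payment retry exceeded for order %s", "A-1001")  # shipped
```

Every record dropped here is a record you do not pay to transfer, ingest, index or store.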

Some other considerations involve forecasting usage and spend. Serverless is easy, but it is expensive at scale. Understand that the cost of a service is more than just CPU time and storage; it also includes metric and log aggregation, invocation durations and rates, network costs and so on. All of these factors need consideration before infrastructure is provisioned, so that you can adjust when usage surpasses cost thresholds.
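A back-of-the-envelope forecast makes this concrete. The following Python sketch estimates a monthly serverless bill from invocation rate, duration and memory; the unit prices are illustrative assumptions, not any provider’s actual rates:

```python
# Assumed unit prices for illustration only; substitute your provider's rates.
PRICE_PER_GB_SECOND = 0.0000167
PRICE_PER_MILLION_REQUESTS = 0.20

def monthly_cost(invocations: int, avg_duration_ms: float, memory_gb: float) -> float:
    """Rough compute-plus-request cost, ignoring logs, metrics and network."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute + requests

print(f"${monthly_cost(1_000_000, 120, 0.5):,.2f}")    # ~ $1.20 at modest traffic
print(f"${monthly_cost(500_000_000, 120, 0.5):,.2f}")  # ~ $601 at scale
```

Note what the sketch deliberately leaves out: log aggregation, metrics and network egress, which at scale can rival the compute line itself.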

What? You haven’t set cost thresholds yet? Before you can do that, you need to decide how you will gather cost and cost-related metrics. Just like other aspects of your infrastructure, you will need to handle observability for cost. Waiting for your invoice at the end of the month is too late. Understanding the per-unit costs of the various aspects of your application will get you closer to real-time approximations of cost, so you can make decisions based on them.
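One way to start is to price each unit of work as it happens. This Python sketch assumes made-up unit costs and a placeholder emit_metric() function standing in for a real metrics client (StatsD, Prometheus and so on):

```python
# Assumed per-unit prices for illustration; derive real ones from your bill.
UNIT_COSTS = {
    "gb_egress": 0.09,        # $ per GB transferred out (assumed)
    "db_reads_million": 0.25, # $ per million row reads (assumed)
}

def estimate_request_cost(egress_gb: float, db_reads: int) -> float:
    """Approximate the marginal cost of serving one request."""
    return (egress_gb * UNIT_COSTS["gb_egress"]
            + (db_reads / 1_000_000) * UNIT_COSTS["db_reads_million"])

def emit_metric(name: str, value: float) -> None:
    # Placeholder: publish to your metrics system instead of printing.
    print(f"{name}={value:.6f}")

# Emitted per request, this gives you a cost signal you can graph,
# alert on and compare against thresholds in near real time.
emit_metric("request.estimated_cost_usd",
            estimate_request_cost(egress_gb=0.002, db_reads=150))
```

With a stream of these approximations, cost thresholds stop being a guess at month’s end and become something you can alert on like latency or error rates.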

These questions and considerations are not all-encompassing, but they are a start down the right path. Understanding more of the “whys” will give you better context for, and therefore better results from, the “hows.” Only then will the full potential of the discipline be unlocked, and only then can you make more informed choices about which tools are needed and how to use them. If not, you may end up throwing a lot of money at one or more vendors and wondering why your optimization methods aren’t working as you had hoped.