At Gekkobrain we get asked a lot why we have built a DevOps tool, given that we already convert ABAP automatically and analyse your HANA readiness.
Surely DevOps is a bit much to take on when you are operating a mission-critical backend?
And although we enjoy converting SQL and ABAP with our robotic algorithms, we have come to realise something: in the same way that our robot estimates complexity, we want to increase the chance that a developer gets it right the first time.
Is DevOps not the opposite of ChaRM?
Many SAP companies are not typical early adopters of a way of working that means hundreds of transports to production each day. And we can’t really think of a reason why any SAP customer would want to do that anyway.
But that’s what DevOps means to many non-SAP teams that believe in CI (continuous integration) and CD (continuous deployment). Teams that push, pull, stash, fork, merge, branch… well, you get the point… teams that are anything BUT centralised and serialised.
But here’s the truth: DevOps and Git might be great for some SAP installations if the only focus were on delivering as much code to production as possible. We use them all the time, but we are a tech company and we run a number of production instances for different stages of operation. Changing your code several times an hour, however, is not what the SAP community needs to learn from DevOps. The way that SAP has protected the integrity of the production environment is good. But as we all know, speed matters, and quality and being right the first time matter even more.
Back to the future
The ideal situation for a fast-paced production pipeline with very high accuracy would be to work on a clone of production data, but then there would be no segregation of duties. The DevOps community realised early on that if cloning was out of the question, the developer needed an in-depth look at the production environment instead. That is an inaccurate way of coding, though, and inaccuracy leads to incremental adjustments. And although continuous deployment and continuous integration are the gold standard of versioning, they don’t solve runtime intelligence. How will this code behave in production? Will it be slow? Will the code sitting 12 levels down in the call stack dump if the dataset is too large? Is there an issue if no data is found, or does one of the APIs needed seem unstable? Wouldn’t it be great if you could go back in time and fix your mistakes?
So we thought long about this, and then we developed a tool which, in a nutshell, offers runtime intelligence to developers BEFORE their code is released anywhere. That solves the issue of cloning the database, and it removes the guesswork from the equation.
It’s not just a whitebox APM tool; it’s information that will make you think twice about making an error you never saw coming.
We call this dynamic code checks, but really it’s a series of predictive algorithms that monitor the parts and properties of the source code in development, correlate them with the model data built up from your productive system(s), and feed information to the developer indicating potential dumps, potential weaknesses and potential scalability issues, i.e. application performance predictions.
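To make the idea concrete, here is a minimal sketch of what correlating a static finding with a production runtime model can look like. This is purely illustrative: the class names, the "SELECT inside LOOP" pattern, the table statistics and the thresholds are all assumptions invented for this example, not Gekkobrain's actual algorithms.

```python
# Illustrative sketch: grade the risk of a static code finding by
# combining it with runtime statistics modelled from production.
from dataclasses import dataclass

@dataclass
class RuntimeProfile:
    table: str
    avg_rows: int            # average rows read in production
    growth_per_month: float  # relative monthly growth of the dataset

@dataclass
class Finding:
    table: str
    pattern: str             # e.g. "SELECT inside LOOP"

def predict_risk(finding: Finding, profiles: dict) -> str:
    """Combine a static finding with the production model to grade risk."""
    profile = profiles.get(finding.table)
    if profile is None:
        return "unknown"     # no production data for this table
    # A read inside a loop scales with production data volume.
    if finding.pattern == "SELECT inside LOOP" and profile.avg_rows > 100_000:
        return "high"        # likely slow, or dumping, on production volumes
    if profile.growth_per_month > 0.10:
        return "medium"      # fine today, but the dataset is growing fast
    return "low"

profiles = {"VBAK": RuntimeProfile("VBAK", avg_rows=2_500_000,
                                   growth_per_month=0.04)}
finding = Finding(table="VBAK", pattern="SELECT inside LOOP")
print(predict_risk(finding, profiles))  # high
```

The point of the sketch is the shape of the feedback loop: the same code pattern is harmless on a 50-row customising table and a production incident on a multi-million-row document table, and only the production model can tell the two apart.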
See for yourself
It’s a lot of information to take in when you see the tool for the first time, but seen in context, i.e. on your own source code, it almost instantly lights up a developer’s eyes.
We call this effect the “gamification” of SAP development. Don’t think for a second that it’s just a game to us, but we do love the fact that developers are finding new ways to improve their code without burning the midnight oil.
We’re committed to helping you reduce first-time mistakes, prevent the repeated ones and give you the runtime intelligence to master the refactoring of old and slow code.
We call it DevOps, but it’s really a lot more than that. It’s Ops-in-Dev-Ops.