Accountability as a Requirement

For those of you who do not know, I work at Oracle on their health platform. I've been working with this particular platform for my entire career, about nine years now. There has been a lot of talk about AI in the health industry lately, and Oracle is exploring those options just as our competitors are. There are some interesting ideas being floated in this space, but generating code with AI and shipping it to clients is currently not one of them. Technologies like GitHub Copilot (do they still call it that?) are not being leveraged. The code is crafted by hand, by humans.

If you're wondering why this is, and why AI code generation isn't sticking in this segment of the industry, it all comes down to accountability. Handing a junior developer a generative AI tool and letting them loose at a lower pay grade may be tempting for some managerial accountant types, but not tempting enough to justify the inevitable consequences. Software written for medical use has weight. There are clear moral implications involved, and, perhaps more importantly from a business's perspective, there is an entity that will make sure you act accordingly. That entity is the United States government. To understand this further, let's look at the engineering process, and how code is written and shipped in a medical context.

1. A process is required that can be audited by the government

This one is big. Audits can happen at random throughout the year, and will definitely happen if the product being shipped causes medical or financial trouble at a health institution. Teams have some flexibility in how they define their process, but it needs to be repeatable and essentially bulletproof. This is done entirely to hold teams accountable for their actions. If a bad defect is shipped, specific requirements will be written about the defect, and tests will be written for that exact defect flow to ensure it never happens again. And yes, those tests will be run against just about every build of the product going forward. I have seen regression tests in place that were written well over a decade ago. Never again is the theme here.
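
To make that concrete, here is a minimal sketch of what a defect-pinned regression test can look like. The defect ID, the dosage parser, and the failure mode (trailing whitespace once parsed as zero) are all invented for illustration, and I'm assuming plain JUnit 5 here; real teams will have their own tracking systems and tooling.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class Defect48213RegressionTest {

    @Test
    @DisplayName("DEFECT-48213: trailing whitespace must not zero out a dosage")
    void trailingWhitespaceDoesNotZeroDosage() {
        // Hypothetical scenario: before the fix, "250 mg " was parsed as 0.
        // This test pins the corrected behavior and runs on every build.
        assertEquals(250, parseDosageMilligrams("250 mg "));
    }

    // Stand-in for the production parser, included only so the sketch is self-contained.
    static int parseDosageMilligrams(String raw) {
        return Integer.parseInt(raw.trim().replaceAll("\\s*mg$", "").trim());
    }
}
```

A test like this outlives the engineer who wrote it; the defect ID in the name is what lets an auditor trace it back to the original incident a decade later.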

2. Requirements, requirements, requirements… and tech design

The world of Agile development exists where we are, but not to the end of being fast and loose. Before beginning, one needs to agree on requirements: real requirements, written by humans, and verified by a tracing mechanism. Sometimes tracing looks like manual screenshots of an application workflow from end to end; other times it is an integration test that lists a specific requirement ID from the documentation and gives a pass/fail result. Team members must then sign off on the work via a form before anything ships. All of these steps exist to hold teams accountable for what they ship (notice a trend here?). Many critics claim this part of the process is antiquated, slow, and strictly a waterfall ideology. When I hear these critiques, I reply: "antiquated, slow, waterfall, but done correctly."
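
For the curious, a requirement-traced test might be sketched like this. The @Requirement annotation, the REQ-1042 ID, and the order-signing rule are all hypothetical, assuming JUnit 5; the point is only that each test declares which requirement it verifies, so a report can map requirement IDs to pass/fail results.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import org.junit.jupiter.api.Test;

// Marker annotation tying a test to the requirement ID it verifies;
// a reporting step can read these at runtime to build a trace matrix.
@Retention(RetentionPolicy.RUNTIME)
@interface Requirement {
    String value();
}

class OrderSigningTraceTest {

    // Minimal stand-in domain object so the sketch is self-contained.
    record Order(boolean signed) {
        boolean isHeld() { return !signed; }
    }

    @Test
    @Requirement("REQ-1042") // hypothetical: "Unsigned orders must never reach the pharmacy queue"
    void unsignedOrderIsHeld() {
        assertTrue(new Order(false).isHeld(), "REQ-1042: unsigned orders must be held");
    }
}
```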

3. Root cause analysis

The worst and rarest defects one could ship require homework to be done. Fixing the defect and shipping the fix is not enough. In the most serious cases, something similar to a white paper will be drafted, outlining what went wrong, why it went wrong, and how it is to be fixed. This process can take days depending on the complexity of the defect. Multiple engineers, test analysts, and managers will be involved. Medical institutions need assurance that the right thing is being done. The government needs to ensure that the parties involved will be held accountable. There is no abstracted layer of bullshit here; accountability is due.

The above is a brief rundown of the overall picture, but it should be enough to get the point across about AI-generated code: it simply will not fly when accountability is a requirement. No one wants to sign off on code generated by an AI if it cannot be fully explained. If the code can be fully explained, it likely won't adhere to the requirements. If it can be fully explained and adheres to the requirements, it still will not hold its own against an audit in the event of a defect. Stating that a piece of code was AI generated as a defense in the face of an audit is not an acceptable position; it is a fireable offense.
