Nick King · June 6, 2023, 9:30 AM EDT · 6 min read

The 90/90 rule: Why ChatGPT was able to create a new application ecosystem so quickly, and the case for repeatable building blocks for Applied AI.

So your team has been working tirelessly on the most critical AI transformation project in the company. It’s intense, the pressure is high, and you’ve hired the best talent you can find. You sit down for your first quarterly check-in with your CEO, prepared progress report in hand. It goes something like this:


Lots of data prep; successful deployment of a complex pipeline; some porting issues between the Data Science team’s notebooks and the AI engineering team’s preferred environment; the model is showing strong progress, though there are some concerns about bias in the data.

Sound familiar?


And in reality, here’s the conversation your CEO actually wants to have:


  • We’ve discovered a new way of working with the supply chain teams so they can work XX% faster using the insights the model has provided
  • We learned that YY was actually the factor causing teams to overpay for supplies
  • If we change this business process, we can save $ZZ and reduce the time it takes to deliver our new products
  • We have a proposal for something we hadn’t thought of before, which we discovered while iterating on outcomes


Notice how none of it sounds like what was keeping your team up at night that first quarter? Maybe you did hit some of the above; well done!

You’ve probably heard of the 80/20 rule, or the 90/10 rule. Today let’s re-introduce the 90/90 rule. Shout out to Gonzo, who shared this with me over a beer a few months ago.

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time — Tom Cargill, Bell Labs


It turns out this is the case over and over again with data science projects today. There’s another impact here too. You know that feeling at the end of a project: the whole team is exhausted, deadlines are looming, and maybe you’re about to ship something that’s not quite done. Do you push your team harder and risk burning them out? Or do you cut some of the deliverables so an MVP can ship on the deadline?


Often, deciding what to cut and what to ship is the magical art of shipping software, and ultimately of shipping outcomes. Steven Sinofsky gives a great breakdown of this on his blog; it’s definitely worth a read when thinking about the trade-offs and the decision process.

Wait, what? There’s a better way?


What if you could jump over that first 90% and focus on that last 90%: building all the things your CEO really wanted to talk about in our hypothetical meeting earlier. Enter ChatGPT.

ChatGPT jumped the first 90% and allowed developers who may not have been able to build it alone to focus on the last 10% of delivery. This outcome isn’t surprising; in many ways it’s the natural evolution of the Applied AI landscape.

Software has evolved over time to support new abstractions that accelerate development speed and free up cognitive load for other things. I don’t think many of us these days crack open an assembly-language editor to finish off an application, but that was definitely happening when PCs were first introduced. [Trigger warning: polarizing software mention coming] Remember Adobe Flash? There was a time when it ran a large part of the web, and I’d argue it enabled non-web developers and designers to translate their desktop client apps into something closer to the modern web apps we see today.

Why building blocks like ChatGPT work

It turns out that while each business problem is unique in its data types, industry, desired outcomes, and more, most problems can be deconstructed into common logical building blocks. And those repeatable building blocks can be reused and managed to help experts like you build AI applications faster.

There’s even a psychological explanation, called Information Processing Theory, for why your brain approaches things this way: your brain has three memory systems — sensory, short-term, and long-term. Generally we load our long-term memory for complex tasks such as writing software and bounce between short-term and long-term memory to innovate new ideas, and you have a finite number of cycles per day.

So what does this mean for you?

First, you want to apply those precious brain cycles to the important problems, like solving the things your CEO asked for in our earlier meeting. Second, it’s important to build out a catalog of foundational models and building blocks to accelerate your AI application development. Third, and most importantly, you must become an expert on the last 10%.
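To make the catalog idea concrete, here is a minimal sketch of what a shared building-block catalog could look like in Python. All names and the toy "sentiment" block are illustrative assumptions, not a real library; the point is that approved, reusable components get registered once and composed per project.

```python
# Minimal sketch of a reusable building-block catalog (illustrative only).
# Each "block" is a callable plus metadata, so teams can discover and reuse
# approved components instead of rebuilding them for every project.

CATALOG = {}

def register(name, description):
    """Decorator that adds a callable building block to the shared catalog."""
    def wrap(fn):
        CATALOG[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@register("clean_text", "Lowercase and strip whitespace before modeling")
def clean_text(text):
    return text.strip().lower()

@register("simple_sentiment", "Toy keyword-based sentiment score in -1..1")
def simple_sentiment(text):
    positive = {"great", "good", "fast"}
    negative = {"bad", "slow", "late"}
    words = text.split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1, min(1, score))

def run_pipeline(value, steps):
    """Compose catalog blocks into a tiny pipeline, in order."""
    for step in steps:
        value = CATALOG[step]["fn"](value)
    return value

result = run_pipeline("  Deliveries were GREAT but invoicing is slow  ",
                      ["clean_text", "simple_sentiment"])
print(result)  # 0: one positive and one negative keyword cancel out
```

In a real setting the blocks would be pre-trained models or hosted services rather than keyword rules, but the catalog pattern is the same: register once, reuse everywhere, and keep the approved list in one place.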

Bringing together your building blocks

The next challenge is how to assemble your building blocks: breaking down multiple ways to solve the first 90% quickly and, importantly, in a repeatable fashion.


At this stage it’s worth identifying which toolsets can enable your teams. Generally I’d recommend evaluating several of the following and defining an official set of tools to be used for production AI use cases. Maintaining the approved list is key: even though it might be tempting to let teams use their favorite tools, platform consistency brings significant benefits, such as skill coverage and supportability, as your teams move around and you spin up future projects.

So what do you have to choose from?

  • AutoML — great for discovery, iteration, and building applications quickly. In many use cases I’ve seen AutoML solve the majority of common AI tasks, and some platforms do so while putting handy guardrails in place to prevent self-inflicted pain.
  • Pre-trained and cloud-hosted AI — leveraging services from your cloud provider for things like sentiment analysis and speech-to-text can drive efficiencies.
  • Model repos — such as Hugging Face, which has quickly become my go-to when rapidly building complex applications. You also get the benefit of open source alongside several pre-trained options.
  • Declarative AI — in many ways declarative AI has a lot of promise to solve Applied AI problems quickly, and with the enterprise controls some of the other options lack. Uber open-sourced its deep learning platform Ludwig in 2019; Apple and LinkedIn have similar projects.
  • Foundational models — and finally, technology such as ChatGPT or GitHub Copilot that provides transferable learning outcomes directly in applications.
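To show the declarative idea from the list above, here is a toy sketch: describe what you want in a config and let a small engine decide how to wire it up. This is not Ludwig’s actual API — the config keys, the `build_model` engine, and the keyword-based "model" are all hypothetical stand-ins for a trained model.

```python
# Toy illustration of the "declarative AI" pattern: a config describes WHAT
# the model should do; a small engine decides HOW. Not Ludwig's real API --
# just a sketch of the idea with a naive rule-based stand-in for a model.

config = {
    "input_features": [{"name": "ticket_text", "type": "text"}],
    "output_features": [{"name": "urgent", "type": "binary"}],
}

def build_model(cfg):
    """Turn a declarative config into a (very naive) predictor."""
    input_name = cfg["input_features"][0]["name"]
    output_name = cfg["output_features"][0]["name"]

    def predict(row):
        text = row[input_name].lower()
        # Stand-in for a learned model: flag urgency keywords.
        return {output_name: any(k in text for k in ("asap", "urgent", "outage"))}

    return predict

model = build_model(config)
print(model({"ticket_text": "Production outage, need help ASAP"}))
# {'urgent': True}
```

The appeal of the pattern is that the config, not the code, becomes the artifact teams review and reuse — swapping the output feature or the underlying model shouldn’t require rewriting the application.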

Becoming the 10%

In previous posts I’ve talked about the need to develop Solution Literacy: focusing on asking the right questions of the business and using those insights to develop an ideal Applied AI solution.

Taking that one step further, I’ve seen many successful leaders regularly assess where their teams’ time is being spent and then actively look for ways to shift that time to the impactful 10%. It’s never going to be perfect, but visibility goes a long way toward enabling corrective action.

Finally, set a regular meeting (every 4–6 weeks seems to be the right cadence) and review with your teams how to grow, manage, maintain, and build from your suite of building blocks. You just might surprise yourself with how much faster you can operate, and you’ll be able to answer the questions your CEO is asking at your next quarterly update.

