IBM wants to teach AI the language of your business




At VB Transform 2024, IBM’s David Cox made a compelling case for open innovation in enterprise generative AI, building on the company’s long-standing commitment to open-source technologies. The VP of AI models and director at the MIT-IBM Watson AI Lab presented a vision that both challenges and inspires the tech industry.

“Open innovation is really the story of human progress,” Cox said, framing the concept as fundamental to technological advancement. Cox emphasized the critical nature of the current moment in AI development, stating, “I think this moment is especially crucial because we all have to make decisions about where we want to invest. How do we want to avoid lock-in?”

All kinds of open

The IBM executive highlighted a nuanced view of openness in AI, challenging the notion that it’s a simple binary concept. “Open isn’t just one thing. It can mean lots of things, actually,” Cox explained. He pointed out the growing ecosystem of open models from various sources, including tech giants, universities and even nation-states.

However, Cox raised concerns about the quality of openness in many LLMs. “In some cases, you’re getting something that’s more like a binary,” he cautioned. “You’re getting a sort of bag of numbers, and you don’t know how it’s produced.” This lack of transparency, Cox argued, can make it difficult or impossible to reproduce these models, undermining a key tenet of open-source principles.


Drawing parallels with traditional open-source software, Cox outlined several characteristics that have made such projects successful: frequent updates, structured release cycles, regular security fixes and active community contributions. He noted: “Everything is well defined, and it doesn’t change dramatically from version to version; there can be incremental contributions, both from within a company and also across the entire community.”

LLMs: Open in name only?

Cox then turned his attention to the current state of open LLMs, pointing out that many lack these essential open-source properties. “Open LLMs, as great as they are — and they’re fantastic — don’t have a lot of these properties today,” he observed. He criticized the irregular release patterns of some companies, saying that companies can drop “new generation models whenever they feel like it. Some model providers release a model and never come back and release an update to it.”

This approach, Cox argued, falls short of true open-source principles and limits the potential for community-driven improvement and innovation in AI. His insights challenge the AI industry to reevaluate its practices around open-source models, calling for more standardized, transparent and collaborative approaches to AI development.

To illustrate his point, Cox highlighted IBM’s own efforts in this direction with their Granite series of open-source AI models. “We release fully everything that’s in the model,” Cox explained, emphasizing IBM’s commitment to transparency. “We’ll tell you exactly what’s there, we’ve actually open sourced all of our processing code so you can know exactly what we did to it, to remove any objectionable content, to filter it for quality.”

This level of openness, Cox argued, doesn’t come at the expense of performance. He presented benchmarks comparing Granite’s code model against other leading models, stating, “These are state of the art models… You don’t have to have opaque models to have highly performant models.”

The enterprise data gap

Cox also proposed a novel perspective on LLMs, framing them primarily as data representations rather than just conversational tools. This shift in understanding comes at a crucial moment, as estimates suggest that within the next 5 to 10 years, LLMs will encompass nearly all publicly available information. However, Cox pointed out a significant gap: The proprietary “secret sauce” of enterprises remains largely unrepresented in these models.

To address this, Cox suggested a mission to represent enterprise data within foundation models, thereby unlocking its full value. While techniques like retrieval-augmented generation (RAG) are common, Cox argued they fall short in leveraging an enterprise’s unique knowledge, policies and proprietary information. The key, he contended, is for LLMs to truly understand and incorporate this enterprise-specific context.
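To see why RAG falls short of what Cox describes, it helps to look at its mechanics: relevant documents are fetched at query time and pasted into the prompt, but the model’s weights never change, so the model itself never learns the business. The toy word-overlap retriever and all names below are illustrative, not IBM’s implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG). The retriever
# here is a crude word-overlap scorer standing in for a real vector
# search; the point is that only the prompt changes, never the model.
import re

def words(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(corpus, key=lambda d: len(words(query) & words(d)),
                  reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Stuff retrieved context into the prompt; the model is untouched."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Refund policy: enterprise customers may cancel within 30 days.",
    "Holiday schedule: offices close the last week of December.",
]
print(build_prompt("What is the refund policy?", corpus))
```

Because the enterprise context lives only in the prompt, it must be re-retrieved on every call and never shapes the model’s underlying representation, which is the gap Cox wants to close.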

Cox outlined a potential three-step approach for enterprises: finding an open, trusted base model, creating a new representation of business data, then deploying, scaling and creating value. He emphasized the critical importance of carefully selecting the base model, particularly for regulated industries. Transparency is crucial because “there are a number of properties that an enterprise needs across a wide variety of industries, regulated industries, other industries, where it needs to be transparent, and in many cases the model providers won’t tell you what data is in their model,” Cox said.

The challenge lies in successfully mixing proprietary data with the base model. To achieve this, Cox argues that the chosen base model must meet several criteria. It should be highly performant as a baseline requirement. More importantly, it must be transparent, allowing enterprises to understand its contents fully. Finally, the model should be open-source, providing the flexibility and control that enterprises need.

Teaching AI your business secrets

Building on his vision for integrating enterprise data with open-source LLMs, Cox introduced InstructLab, a collaborative project between IBM and Red Hat that brings this concept to life. This initiative, first reported by VentureBeat in May, represents a practical implementation of Cox’s three-step approach to enterprise AI adoption.

InstructLab addresses the challenge of incorporating proprietary enterprise knowledge into AI models. It offers a “genuinely open-source contribution model for LLMs,” as Cox described it.

The project’s methodology revolves around a taxonomy of world knowledge and skills, enabling users to precisely target areas for model enhancement. This structured approach facilitates the integration of enterprise “secret sauce” that Cox highlighted as missing from current LLMs. By allowing contributions through simple examples or relevant documents, InstructLab lowers the barrier for domain experts to participate in model customization.
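The taxonomy idea above can be sketched as a structured contribution: a leaf in a tree of knowledge and skill categories holding a few expert-written seed examples. The field names, file path and policy text below are simplified, hypothetical illustrations of the concept, not InstructLab’s exact schema.

```python
# Illustrative sketch of a taxonomy contribution. A domain expert files
# a handful of seed Q&A pairs under a category path, optionally pointing
# at a source document. All fields and content here are hypothetical.
taxonomy_entry = {
    "path": "knowledge/company/returns_policy",   # where this skill lives
    "document": "internal-policies/returns.md",   # hypothetical source doc
    "seed_examples": [
        {"question": "How long do enterprise customers have to cancel?",
         "answer": "Enterprise customers may cancel within 30 days."},
        {"question": "Who approves refunds above $10,000?",
         "answer": "Refunds above $10,000 require CFO approval."},
    ],
}

# The path pins down exactly which part of the model's competence the
# contribution targets, so updates can be scoped and reviewed.
print(taxonomy_entry["path"])
```

Scoping contributions by path is what makes targeted enhancement possible: a reviewer sees which capability is changing, and the training pipeline can update just that area.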

InstructLab’s use of a “teacher” model to generate synthetic training data addresses the challenge of mixing proprietary data with base models. This innovative approach maintains model performance while adding enterprise-specific capabilities.
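The teacher-model step can be sketched as fanning a few human-written seeds out into many synthetic training pairs. The template paraphraser below is a trivial stand-in for what is, in practice, a full LLM teacher; every name and string here is illustrative.

```python
# Sketch of synthetic data generation from seed examples. A real
# "teacher" model would produce diverse rephrasings and new questions;
# this stand-in just applies fixed templates to show the fan-out.
def teacher_paraphrase(question: str) -> list[str]:
    """Stand-in teacher: emit simple phrasing variants of a question."""
    templates = [
        "Could you tell me: {q}",
        "In plain terms, {q}",
        "{q} Please answer per company policy.",
    ]
    return [t.format(q=question) for t in templates]

def generate_synthetic_data(seeds: list[dict]) -> list[dict]:
    """Expand each seed Q&A pair into several synthetic training pairs."""
    return [{"question": variant, "answer": seed["answer"]}
            for seed in seeds
            for variant in teacher_paraphrase(seed["question"])]

seeds = [{"question": "What is the refund window?",
          "answer": "30 days for enterprise customers."}]
print(len(generate_synthetic_data(seeds)))  # one seed becomes 3 pairs
```

The fan-out is the point: a few expert-written examples become enough training data to tune the model without hand-labeling thousands of pairs.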

Notably, InstructLab significantly accelerates the model update cycle. “We can even turn this around one day,” Cox stated, contrasting this with traditional “monolithic, sort of one year release cycles.” This agility allows enterprises to rapidly integrate new information and adapt their AI models to changing business needs.

Cox’s insights and IBM’s InstructLab point to a shift in enterprise AI adoption. The focus is moving from generic, off-the-shelf models to tailored solutions that reflect each company’s unique expertise. As this technology matures, the competitive edge may well belong to those who can most effectively turn their institutional knowledge into AI-powered insights. The next chapter of AI isn’t just about smarter machines — it’s about machines that understand your business as well as you do.


