Choosing an AI model is as much a strategic decision as a technical one. Open, closed and hybrid models each come with trade-offs.
Speaking at this year's VB Transform, model architecture experts from General Motors, Zoom and IBM discussed how their companies and customers weigh AI model selection.
Barak Turovsky, who became GM's first chief AI officer in March, said there is a lot of noise with every new model release, and the leaderboard changes each time. Long before leaderboards went mainstream, Turovsky helped launch the first large language model (LLM) and recalled the debates around open-sourcing AI model weights and training data, which led to key breakthroughs.
“That was arguably one of the biggest breakthroughs that helped OpenAI and others start launching,” Turovsky said. “So it's actually a funny anecdote: open source really helped create something that went closed, and is now probably going open again.”
Decision factors vary and include cost, performance, trust and safety. Turovsky said enterprises sometimes prefer a mixed strategy: an open model for internal use and a closed model for production and customer-facing work, or vice versa.
IBM’s AI Strategy
Armand Ruiz, IBM's VP of AI platform, said IBM initially launched its platform with its own LLMs, but then realized that would not be enough, especially as more powerful models arrived on the market. The company then expanded to offer integrations with platforms such as Hugging Face so customers could choose any open-source model. (The company recently announced a new model gateway that gives enterprises an API to switch between LLMs.)
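The idea behind a model gateway can be sketched in a few lines: one API surface, with pluggable backends the caller can switch between. This is a minimal illustrative sketch, not IBM's actual gateway; the `Gateway` class and backend names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch of a model gateway: a single completion API,
# with interchangeable LLM backends selected by name at call time.
@dataclass
class Gateway:
    backends: Dict[str, Callable[[str], str]]  # model name -> completion fn

    def complete(self, model: str, prompt: str) -> str:
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        return self.backends[model](prompt)

# Stand-in backends; in practice these would wrap vendor SDK calls.
gw = Gateway(backends={
    "open-model": lambda p: f"[open-model] {p}",
    "closed-model": lambda p: f"[closed-model] {p}",
})

print(gw.complete("open-model", "Summarize Q3 risks"))
```

Because the prompt-facing interface stays the same, swapping an open model for a closed one (or vice versa) becomes a one-line configuration change rather than a rewrite.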
More enterprises are choosing to buy multiple models from multiple vendors. When Andreessen Horowitz surveyed 100 CIOs, 37% of respondents said they were using five or more models, up from 29% the year before.
Ruiz said choice is important, but too much choice can create confusion. To help customers with their approach, IBM does not worry much about which LLM they use during the proof-of-concept or pilot phase; the main goal is feasibility. Only later does it look at whether a model needs to be distilled or customized based on the customer's needs.
“First we try to simplify all that analysis paralysis with all those options and focus on the use case,” Ruiz said. “Then we figure out what the best path to production is.”
How Zoom approaches AI
Zoom customers can choose between two configurations for its AI Companion, Zoom CTO Xuedong Huang said. One involves federating the company's own LLM with other large foundation models. The other, for customers wary of using too many models, relies on Zoom's model alone. (The company recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion for enterprise workflows.)
Huang said the company built its small language model (SLM) without using customer data. At 2 billion parameters, it is genuinely small, yet it can still outperform other industry-specific models. The SLM works best on complex tasks when paired with a larger model.
“This is really the power of a hybrid approach,” Huang said. “Our philosophy is very straightforward. Our company is a big fan of Mickey Mouse and the elephant dancing together. The small model will perform a very specific task. We are not saying a small model will be good enough … Mickey Mouse and the elephant will work together as a team.”
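The hybrid pattern Huang describes can be sketched as a simple routing policy: the small model handles the narrow task it was built for, and anything it is not confident about escalates to a larger foundation model. This is an illustrative sketch only; the stand-in models, confidence scores and threshold are assumptions, not Zoom's implementation.

```python
from typing import Tuple

def small_model(prompt: str) -> Tuple[str, float]:
    # Stand-in SLM: returns an answer plus a confidence score.
    # It is only "good" at the narrow task it was trained for.
    if "meeting summary" in prompt:
        return ("summary from small model", 0.95)
    return ("unsure", 0.2)

def large_model(prompt: str) -> str:
    # Stand-in for a large foundation model handling everything else.
    return "answer from large foundation model"

def hybrid_answer(prompt: str, threshold: float = 0.8) -> str:
    answer, confidence = small_model(prompt)
    # Escalate to the elephant when the mouse is not confident enough.
    return answer if confidence >= threshold else large_model(prompt)

print(hybrid_answer("write a meeting summary"))
print(hybrid_answer("draft a legal analysis"))
```

The design choice here is cost and latency: most routine requests never leave the cheap, fast small model, and the expensive large model is only invoked for the residue.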