In the world of artificial intelligence software, the potential danger of systems not always doing what their developers intend has attracted the attention of some tech giants. Microsoft has joined forces with former Google CEO Eric Schmidt to back a startup called Synth Labs, which is dedicated to solving this alignment problem.
Synth Labs, whose founders include former Illumina Inc. CEO Francis deSouza along with Louis Castricato and Nathan Lile, both of whom worked at the nonprofit AI research lab EleutherAI, has raised initial funding from Microsoft Corp.'s venture capital fund M12 and Eric Schmidt's First Spark Ventures. The company has struck a tone of transparency and collaboration.
The AI alignment problem primarily involves applications built on large language models, such as chatbots, which are often trained on large amounts of internet data. Solving this problem is complicated by differences in people's ethics and values, as well as different views on what AI should and should not do. Synth Labs' products are designed to help guide and customize large language models, especially those that are open source.
The company was originally launched as a project within EleutherAI, a joint effort of Castricato, Lile, and Biderman. Over the past few months, Synth Labs has developed tools that enable the evaluation of large language models on complex topics. Its goal is to provide easy-to-use tools that let people automatically evaluate and align AI models.
A research paper co-authored by Castricato, Lile, and Biderman shows off Synth Labs' approach: a dataset was created from prompt responses generated by OpenAI's GPT-4 and Stability AI's Stable Beluga 2 models. This dataset was then used as part of an automated process to guide a chatbot to avoid discussing one topic and instead talk about another.
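To make the idea concrete, the pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration: the topic names, the scoring rule, and all identifiers here are assumptions for the example, not Synth Labs' actual code. In the paper's setup the two responses per prompt would come from GPT-4 and Stable Beluga 2; here they are hard-coded stand-ins.

```python
# Sketch of an automated preference-labeling step: two model responses per
# prompt are compared by a simple topic rule, producing (prompt, chosen,
# rejected) pairs of the kind used for preference-based alignment training.

AVOID_TOPIC = "politics"   # illustrative topic the chatbot should steer away from
PREFER_TOPIC = "cooking"   # illustrative topic it should steer toward

def label_pair(prompt: str, response_a: str, response_b: str) -> dict:
    """Automatically pick the preferred response using the topic rule."""
    def score(text: str) -> int:
        t = text.lower()
        # +1 if the preferred topic appears, -1 if the avoided topic appears
        return (PREFER_TOPIC in t) - (AVOID_TOPIC in t)

    if score(response_a) >= score(response_b):
        return {"prompt": prompt, "chosen": response_a, "rejected": response_b}
    return {"prompt": prompt, "chosen": response_b, "rejected": response_a}

# Stand-in responses; in practice these would be sampled from two LLMs.
pairs = [
    label_pair(
        "What should we talk about?",
        "Let's discuss politics and the election.",
        "Let's talk about cooking a good risotto.",
    ),
]
print(pairs[0]["chosen"])
```

A dataset of such pairs can then feed a standard preference-tuning method, with the labeling rule swapped for whatever "alignment" means in a given deployment.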
“The way we designed these early tools is primarily to give you the opportunity to decide what alignment means for your business or your personal preferences,” Lile said.
The emergence of Synth Labs gives more companies a transparent, collaborative way to ensure that their AI systems act as expected, which is particularly important at the current stage of AI development.