Feature: Leveraging LLMs in Liminal
How we're trying to make it easier for scientists to leverage AI in everyday research
This is a Feature blog, where we write about the what and the why of features we’ve incorporated into Liminal.
About a month ago, we purchased API keys for one of the latest large language models (LLMs) and plugged them into our app. We set out simply to make it easier to go from asking a chatbot (like ChatGPT) a question to adding its output to your electronic notebook. We did this by placing the chatbot in the same window as the electronic notebook and adding a button that copies the output straight into your notebook.
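For the technically curious, the whole flow is not much more complicated than it sounds. Here is a minimal sketch, assuming the OpenAI Python client as the LLM backend; the model name and the copy_to_notebook helper are placeholders for illustration, not our actual code:

```python
# Rough sketch of the chat-to-notebook flow. The OpenAI client is just one
# example of an LLM API, and copy_to_notebook is a hypothetical helper
# standing in for Liminal's notebook backend.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable


def ask_assistant(question: str) -> str:
    """Send the user's question to the LLM and return its plain-text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def copy_to_notebook(entry: list[str], text: str) -> None:
    """Hypothetical stand-in for the copy button: append the answer to an open entry."""
    entry.append(text)


notebook_entry: list[str] = []
answer = ask_assistant("How do I set up a PCR reaction?")
copy_to_notebook(notebook_entry, answer)
```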
Our next step was to give our LLM its first lesson: feed all the papers the lab has ever written into the model and generate little bits of intel, which we can then suggest back to users when they write a new notebook entry. Since the intel comes from papers written by the lab itself, it is well-trusted.
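A simplified way to picture that pipeline (the snippets below are made up, and the word-overlap scoring is a toy stand-in for whatever retrieval actually runs behind the scenes):

```python
# Toy sketch of the "intel from lab papers" idea: short snippets are pulled
# from the lab's publications ahead of time, then scored against a draft
# notebook entry so the most relevant ones can be suggested back.
# The snippets and the word-overlap metric here are illustrative only.

INTEL_SNIPPETS = [
    "Annealing at 58 C gave the cleanest bands for primer set 3.",
    "Cells were passaged no more than 20 times before imaging.",
    "RNA extractions were always done with fresh TRIzol aliquots.",
]


def score(snippet: str, draft: str) -> int:
    """Count words shared between a snippet and the draft entry (toy metric)."""
    return len(set(snippet.lower().split()) & set(draft.lower().split()))


def suggest_intel(draft_entry: str, top_k: int = 2) -> list[str]:
    """Return the snippets most relevant to the entry being written."""
    ranked = sorted(INTEL_SNIPPETS, key=lambda s: score(s, draft_entry), reverse=True)
    return ranked[:top_k]


print(suggest_intel("Setting up PCR with primer set 3, unsure about annealing temperature"))
```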
Next, we set out to ask our labs about the lesser-known knowledge that only they have. For example, Fridge A is for clean samples, whereas Fridge B is for dirty samples. Or, we keep our genomics data on /depot/negishi/johnsonlab. We actively feed this into our model so it can tailor personalized responses to our users.
“Hey assistant, I want to do PCR.”
“Okay, here’s a protocol … make sure you store your samples in Fridge B.”
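Behind an answer like that is nothing mystical: the lab’s facts get folded into the prompt before the question ever reaches the model. A minimal sketch, again assuming the OpenAI client, with placeholder facts and model name:

```python
# Sketch of how lab-specific facts can be prepended to the conversation so the
# assistant's answers reflect them. The facts, model name, and OpenAI client
# are placeholders, not Liminal's actual implementation.
from openai import OpenAI

LAB_FACTS = [
    "Fridge A is for clean samples; Fridge B is for dirty samples.",
    "Genomics data lives on /depot/negishi/johnsonlab.",
]

client = OpenAI()


def ask_lab_assistant(question: str) -> str:
    """Answer a question with the lab's own facts folded into the system prompt."""
    system_prompt = (
        "You are a lab assistant. Use these lab-specific facts when relevant:\n"
        + "\n".join(f"- {fact}" for fact in LAB_FACTS)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask_lab_assistant("Hey assistant, I want to do PCR."))
```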
But this is just the start. I mean, if ChatGPT knows everything on the internet, why would anyone use Liminal’s integration of the model? Because Liminal is the vessel that builds on the latest AI models and teaches your assistant the knowledge about your lab that only you know. You teach it while you are already doing your work. As we expand, we plan to iteratively teach your lab assistant from many other sources. When a notebook entry is complete, the model will learn from it and know a bit more about you, becoming just a little bit more helpful.
Our plan for our customers is this: to help you build your own personal assistant that you can take with you for the duration of your career. We aren’t creating these models; that’s what much, much larger companies are doing. Our goal is to have you start teaching these models the things about you that they can’t find online. We want to give you a head start toward having your own scientific, verifiable, and custom assistant to take with you throughout your career. This is the advantage we are striving to give our scientists.
Sincerely,
Dane