Organizations seeking to integrate digital-first practices into their engineering processes often quickly discover a common roadblock: critical dependencies on the individual expertise of specific employees embedded in legacy workflows. This discovery has prompted some to ask what role generative technologies might play in supporting digital engineering transformation efforts: “Can they help us reduce reliance […]
We’ve all seen it—a couple on a date, politicians, friends, or colleagues talking right past each other, trapped in a moment of profound misunderstanding over the meaning of a single word. For me, that moment came when my partner, a New Yorker through and through, told me, a Midwesterner, to take “the next left” while […]
Thus far in our ongoing series on artificial intelligence, we’ve spoken in depth about questions of trust, human perception, and the limitations of generative models. We have focused specifically on large language models (LLMs), due in part to their recent successes and media attention. We’ve explored questions of data, testing, and broad model implications. However, LLMs […]
In my ongoing efforts to engage deeply with research into large language models, I have continually wrestled with a persistent sense of dissatisfaction. Unfortunately, its source has been frustratingly difficult to articulate and pin down. At times, I have wondered if I’m not dissatisfied but rather uneasy because […]
A broken 12-hour clock is correct for its assigned job – telling the time – twice a day. It is correct for an alternative job – being a paperweight – almost all the time. Then again, even as a paperweight, a broken clock might perform terribly if we use it to hold paper to a […]
In all my years as a researcher, I’ve never had so many friends and family members asking me about AI – chatbots, in particular. Even people whom I would have described as fairly uninterested in tech in general have shared with me their experiences interacting with ChatGPT, or expressed that they are fearful and/or intrigued […]
The latest installment of our ongoing “AI and Trust” series comes in the form of a tech talk given by Galois Principal Scientist Shauna Sweet on March 6, 2023. In her presentation, Sweet helps us dig deeper to uncover the core ideas, concepts, and principles behind large language models (LLMs), tackling such central questions as: […]
“We assume that the neuron is the basic functional unit, but that might be wrong. It might be that thinking of the neuron as the basic functional unit of the brain is similar to thinking of the molecule as the basic functional unit of the car, and that is a horrendous mistake.” – John Searle […]
The public release of ChatGPT and DALL-E 2 radically changed our expectations for the near future of AI technologies. Given the demonstrated capability of large generative models (LGMs), the ways in which they immediately captured the public imagination, and the level of publicized planned capital investment, we can anticipate rapid integration of these models into current […]