Generative AI | News, how-tos, features, reviews, and videos
RAG is a pragmatic and effective approach to using large language models in the enterprise. Learn how it works, why we need it, and how to implement it with OpenAI and LangChain.
We can dramatically increase the accuracy of a large language model by providing it with context from custom data sources. LangChain makes this integration easy.
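The core of that pattern, retrieval-augmented generation, can be sketched in a few lines: pull the most relevant snippet from your own data, then prepend it to the prompt as context. This toy version uses a made-up document list and a naive word-overlap scorer in place of a real embedding-based retriever (such as a LangChain vector store); all names and data here are illustrative.

```python
# Minimal RAG sketch: retrieve a relevant snippet from custom data,
# then build an augmented prompt for the LLM. The keyword scorer is a
# stand-in for a real embedding-based retriever.

DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Premium plans include priority support and a dedicated account manager.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble the context-augmented prompt an LLM would receive."""
    context = retrieve(question, DOCUMENTS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the refund policy?"))
```

A production version would swap `retrieve` for a vector-store similarity search and send `build_prompt`'s output to a chat model; the prompt-assembly step stays essentially the same.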
Get a hands-on introduction to generative AI with these Python-based coding projects using OpenAI, LangChain, Matplotlib, SQLAlchemy, Gradio, Streamlit, and more.
The advantages of LangChain are clean and simple code and the ability to swap models with minimal changes. Let’s try LangChain with the PaLM 2 large language model.
LangChain is one of the hottest frameworks for working with LLMs and generative AI—but it's typically only for Python and JavaScript users. Here's how R users can get around that.
Learn how to use Google Cloud Vertex AI and the PaLM 2 large language model to create text embeddings and search text ranked by semantic similarity.
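The ranking step behind that kind of semantic search is just cosine similarity between embedding vectors. Below is a minimal sketch assuming each passage has already been embedded; the tiny 3-dimensional vectors are made up for illustration (a real embedding model such as PaLM 2's returns much higher-dimensional vectors).

```python
# Rank texts by semantic similarity to a query, given precomputed
# embedding vectors. Vectors here are hypothetical toy values.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings keyed by the text they represent.
corpus = {
    "How to bake bread": [0.9, 0.1, 0.0],
    "Sourdough starter tips": [0.8, 0.2, 0.1],
    "Fixing a flat tire": [0.0, 0.1, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "bread baking basics"

ranked = sorted(corpus, key=lambda t: cosine_similarity(query_vec, corpus[t]),
                reverse=True)
print(ranked)  # bread-related texts rank above the unrelated one
```

In practice you would obtain `query_vec` and the corpus vectors from the embedding API and store them in a vector database rather than a dict, but the ranking logic is the same.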
Using the PaLM 2 large language model available in Google Cloud Vertex AI, you can create a chatbot in just a few lines of code. These are the steps.
Generative AI output isn’t always reliable. Here’s how to improve the code and queries created by the technology behind ChatGPT, and how to avoid sending sensitive data to these tools.
Use React and the Stable Diffusion API to build a reactive AI application that generates images from user-submitted text.
Digital twins have enormous potential for bridging IT and OT, but developing them is not cheap. Here's how to ensure a successful rollout.