Saturday, March 29, 2025

Getting Started with Google Gemini API


Google's Gemini API provides powerful generative AI capabilities that you can integrate into your applications for use cases such as chatbots and content generation. In this blog post, we'll explore how to use the Google Gemini API with Python, including a simple example and a Streamlit-based chatbot implementation.



Prerequisites

Before you start, ensure you have the following:

  • Python installed on your system

  • An API key for Google Gemini (stored in an environment variable or a secrets file)

  • Required Python libraries installed (google-generativeai and streamlit for the Streamlit example)
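On macOS or Linux, you can export the key for the current shell session so that `os.getenv("GEMINI_API_KEY")` in the scripts below can find it (replace the placeholder with your real key from Google AI Studio):

```shell
# Make the key visible to child processes in this shell session.
export GEMINI_API_KEY="your-api-key-here"
```

On Windows, the equivalent is `set GEMINI_API_KEY=...` in cmd or `$env:GEMINI_API_KEY = "..."` in PowerShell.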

Installing Dependencies

You can install the required package using:

pip install google-generativeai streamlit

Simple Example: Making a Basic API Call

The following Python script demonstrates how to interact with the Gemini API using a simple function to retrieve the capital of India.

import google.generativeai as genai
import os

# Read the API key from the environment and configure the SDK.
API_KEY = os.getenv("GEMINI_API_KEY")
genai.configure(api_key=API_KEY)

# Initialize the model used throughout this post.
model = genai.GenerativeModel('gemini-1.5-flash-8b-exp-0924')

# Send a one-off prompt and print the generated text.
response = model.generate_content("Tell the capital of India.")

print(response.text)

Explanation:

  1. Import the required module.

  2. Retrieve the API key from environment variables.

  3. Configure the Gemini API using genai.configure.

  4. Initialize the generative model (gemini-1.5-flash-8b-exp-0924).

  5. Call generate_content with a simple prompt.

  6. Print the response received from the API.
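One pitfall worth guarding against: os.getenv returns None when the variable is unset, and the failure only surfaces later as a confusing authentication error from the API. A small helper (load_api_key is our own name, not part of the SDK) fails fast with a clear message instead:

```python
import os

def load_api_key(var_name="GEMINI_API_KEY"):
    """Return the API key, raising a clear error if the variable is unset or empty."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable before running.")
    return key
```

You would then call genai.configure(api_key=load_api_key()) instead of passing the raw os.getenv result.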

Creating a Chatbot with Streamlit

For a more interactive experience, let's build a chatbot using Streamlit that integrates with the Gemini API.

import google.generativeai as genai
import streamlit as st

# Load the API key from Streamlit secrets and configure the SDK.
api_key = st.secrets["GEMINI_API_KEY"]
genai.configure(api_key=api_key)

model = genai.GenerativeModel('gemini-1.5-flash-8b-exp-0924')

# Streamlit UI
st.title("Gemini API Chatbot")

# Keep the chat history across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay previous messages so the conversation stays visible.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask your question - Rajamanickam.Com Demo"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        try:
            # stream=True yields partial chunks, which drives the typing effect.
            for chunk in model.generate_content(prompt, stream=True):
                full_response += (chunk.text or "")
                message_placeholder.markdown(full_response + "▌")  # Animated typing effect.
            message_placeholder.markdown(full_response)
        except Exception as e:
            full_response = f"An error occurred: {e}"
            message_placeholder.markdown(full_response)

        st.session_state.messages.append({"role": "assistant", "content": full_response})
Explanation:
  1. Import the necessary libraries (google-generativeai and streamlit).

  2. Load the API key from Streamlit secrets.

  3. Configure the Gemini API and initialize the model.

  4. Set up a basic Streamlit UI with a title.

  5. Maintain a chat history in st.session_state.

  6. Display previous messages in a chat format.

  7. Handle user input and generate AI responses.

  8. Use st.chat_message for a clean chat-style interface.
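For the st.secrets lookup to work, the key must live in a .streamlit/secrets.toml file next to your script (or in the app's secrets settings when deploying to Streamlit Community Cloud):

```toml
# .streamlit/secrets.toml -- do not commit this file to version control.
GEMINI_API_KEY = "your-api-key-here"
```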

Running the Chatbot

To run the Streamlit chatbot, save the script as app.py and execute:

streamlit run app.py

This will open a browser window where you can interact with the chatbot.

This Streamlit sample code currently maintains only the chat history for display purposes but does not preserve context in the conversation when sending queries to the Gemini API. Each user input is treated as an independent query, meaning the model does not remember previous interactions.

To maintain context, you can modify the code to concatenate previous messages into a single prompt before sending them to the API. Here's an improved approach:

Modify the generate_content call so it receives the accumulated history instead of only the latest prompt:


# Build one prompt containing the whole conversation so far.
context = "\n".join([msg["content"] for msg in st.session_state.messages])
full_prompt = f"{context}\nUser: {prompt}\nAssistant:"
for chunk in model.generate_content(full_prompt, stream=True):
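To see what the model actually receives with this approach, here is the concatenation step in isolation, using a made-up two-message history for illustration:

```python
# Simulate st.session_state.messages after one completed exchange.
messages = [
    {"role": "user", "content": "Tell the capital of India."},
    {"role": "assistant", "content": "The capital of India is New Delhi."},
]
prompt = "What is its population?"

# Same logic as in the chatbot: join prior turns, then append the new one.
context = "\n".join([msg["content"] for msg in messages])
full_prompt = f"{context}\nUser: {prompt}\nAssistant:"
print(full_prompt)
```

Note that this sends the entire history on every turn, so very long conversations will eventually exceed the model's input limit. The google-generativeai SDK also offers model.start_chat(), whose send_message method tracks the conversation history for you.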

Conclusion

Google Gemini API provides an easy way to integrate generative AI into your applications. Whether you need a simple API call for content generation or an interactive chatbot, Gemini's powerful models can help. Try out these examples and explore more advanced features to enhance your AI applications.

