In this article, we will create an LLM-driven web application using Python, NiceGUI, Jinja2, and VertexAI. You will learn how to create such a project from scratch and get an overview of the underlying concepts.
The result will be your very own chatbot, but with a twist: the user will be able to select different personalities to get surprising answers from the AI.
Let’s start with a quick overview of the 🚀 tech stack:
Let’s start by taking a closer look at how to create the project and how dependencies are managed in general. For this, we are using Poetry, a tool for dependency management and packaging in Python.
The three main tasks Poetry can help you with are: Build, Publish and Track. The idea is to have a deterministic way to manage dependencies, to share your project and to track dependency states.
Poetry also handles the creation of virtual environments for you. By default, those live in a centralized folder on your system. However, if you prefer to have the virtual environment of a project inside the project folder, like I do, it is a simple config change:
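The following one-liner switches Poetry to in-project virtual environments:

```shell
poetry config virtualenvs.in-project true
```

From then on, Poetry creates a `.venv` folder inside each project instead of using the centralized location.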
With poetry new you can then create a new Python project. It will create a virtual environment linked to your system's default Python. If you combine this with pyenv, you get a flexible way to create projects using specific Python versions. Alternatively, you can also tell Poetry directly which Python version to use: poetry env use /full/path/to/python.
Once you have a new project, you can use poetry add to add dependencies to it.
Let’s start by creating a new project:
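The project name below is an assumption based on the module name used later in this article; adjust it to your liking:

```shell
poetry new my-gemini-chatbot
cd my-gemini-chatbot
```

Poetry creates the folder `my-gemini-chatbot` containing the Python module `my_gemini_chatbot` along with the project metadata files.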
The metadata about your project, including the dependencies with their respective versions, is stored in the pyproject.toml and poetry.lock files.
Now let’s add the dependencies we need to get started with:
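For the first part of the project, NiceGUI is all we need (Jinja2 and the VertexAI SDK are added in later sections):

```shell
poetry add nicegui
```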
Basic web UI with NiceGUI
NiceGUI is a Python library that allows you to create graphical user interfaces (GUIs) for web browsers. Even beginners can get started quickly, but it also offers plenty of customization options for more advanced users. The web view is based on the Quasar Framework, which offers plenty of components. NiceGUI also integrates TailwindCSS, so you can directly use TailwindCSS classes for your NiceGUI pages.
Especially for me as a Data Engineer coming from Backend Software Development, this is a nice way to create small web UIs using just Python. Of course, for more complex frontends this might not be a sufficient solution, but if the scope is rather small, you will be able to quickly see results. NiceGUI lets you focus on the Python code for your application, because it handles all the behind-the-scenes web development tasks.
NiceGUI uses common UI components like buttons, sliders, and text boxes, and arranges them on pages using flexible layouts. These components can be linked to data in your Python code, so the interface updates automatically when the data changes. You can also style the appearance of your app to fit your needs.
The easiest way to explain how it works is to show it. So let us start by creating a minimal example.
Create a main.py in your module (so my_gemini_chatbot in my case), which will be used for all of our application and frontend logic.
With the following code, you get a simple page with a label:
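A minimal sketch could look like this (the label text is my own choice):

```python
from nicegui import ui

# A single label is enough for a first page.
ui.label("Hello NiceGUI!")

# Starts the web server (port 8080 by default) and opens the page in the browser.
ui.run()
```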
When you run the application, it will be available on port 8080 and the page will open automatically in your browser. This is how it looks:
Congratulations: Your first frontend with pure Python 😉.
Prepare chatbot web UI
The next step is to prepare the web UI for our chatbot. Of course, this will be a little more complex than the example above, but once you get the basic idea of how to place components with NiceGUI, things will become easier.
First, we need to understand some layout basics. There are multiple ways to control how components are placed on the page. One common approach is the grid layout, which we will be using.
In NiceGUI, we can create a grid like this:
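A sketch of such a grid, with one element spanning the full first row (the label text is an assumption):

```python
from nicegui import ui

# A 16-column grid, centered, taking 3/4 of the browser width.
with ui.grid(columns=16).classes("w-3/4 place-self-center gap-4"):
    # This element spans all 16 columns of the first row.
    ui.label("Welcome to the chatbot").classes("col-span-full")

ui.run()
```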
Let’s deconstruct that one by one to get a better understanding. ui.grid(columns=16) initializes a grid layout split into 16 columns of equal width. This says nothing about the actual width of our grid, only into how many columns it is divided. With 16 columns, we have enough flexibility.
With .classes we can add custom TailwindCSS classes. Here, we added 3 classes to our grid:
w-3/4: The grid should always take 3/4 of the full width of the browser
place-self-center: The grid itself should be centered in the browser window
gap-4: There should be a gap of 1rem (16px with default settings) between elements within the grid
In the above example, we then placed one element in the grid:
As you can see, we again assigned a custom class called col-span-full, which tells NiceGUI that this element should use all available columns of the first row — in our case, all 16 columns.
There are classes for every amount of columns, so you can also fill one row with 2 elements by assigning col-span-10 to the first and col-span-6 to the second element.
With this knowledge, we can add all the elements we need for our chatbot:
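The exact labels and options below are my assumptions; the structure follows the grid approach from above, with a select for the personality, an input for the user prompt, and a send button:

```python
from nicegui import ui

with ui.grid(columns=16).classes("w-3/4 place-self-center gap-4"):
    ui.markdown("## Gemini Chatbot").classes("col-span-full")
    # Personality selection for the "twist" later in the article.
    ui.select(["Default", "Santa Claus"], value="Default",
              label="Personality").classes("col-span-full")
    # The prompt the user wants to send to Gemini.
    ui.input(label="Your prompt").classes("col-span-full")
    ui.button("Send to Gemini").classes("col-span-full")

ui.run()
```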
Which will result in the following web UI:
Not too bad for a UI entirely coded in Python.
Add basic functionality
The next task is to add basic functionality. We will not yet interact with VertexAI or Gemini, but we want the following behavior: when the Send to Gemini button is clicked, a notification reflecting the user input should pop up.
There is one important concept to explain: our frontend is served by one instance of our Python script. Now imagine we stored the user input in a global variable and another user, using the chatbot at the same time, submitted a different value. The value of the first user would be overwritten, which would lead to funny but unexpected behavior.
Recently NiceGUI introduced the Storage feature to handle such situations. This is a straightforward mechanism for data persistence based on five built-in storage types, some of them storing data client-side and others server-side.
However, the Storage feature can only be used in the context of page builders. Basically, this means that instead of simply coding our web page in the main script, we wrap it into a function per page. We only have one page, so we only need one function: index(). We then tell NiceGUI with a decorator that this function defines a page, together with the path of the page, which is simply / for the main index page:
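A sketch of the page builder:

```python
from nicegui import ui

@ui.page("/")  # this function builds the page served at /
def index():
    ui.label("Hello NiceGUI!")

ui.run()
```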
Now that we are using the page decorator, we are able to use the Storage feature as well. We will use a simple client side storage. To do so, we need to import app from nicegui and then we can access a dictionary based storage like: app.storage.client.
Another feature from NiceGUI which makes it easy to work with data is binding input elements to variables. That way, we can bind the input element for our user prompt to a variable stored in the client storage mentioned above:
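A sketch of the bindings; the storage keys "personality" and "user_prompt" are my own naming:

```python
from nicegui import app, ui

@ui.page("/")
def index():
    # Each element's value is kept in sync with a key in the client storage.
    ui.select(["Default", "Santa Claus"], value="Default",
              label="Personality").bind_value(app.storage.client, "personality")
    ui.input(label="Your prompt").bind_value(app.storage.client, "user_prompt")

ui.run()
```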
Now the value of a bound element can always be accessed via the storage dictionary, for example app.storage.client.get("personality") for the personality select.
Also, NiceGUI allows you to define on_click parameters for buttons and other elements. This parameter takes a reference to a regular Python function. That way, we can make our web application interactive.
To begin with, we will introduce a send() function. We will use that later to interact with the Gemini LLM. For now, we will simply show a notification to the user with the current input values of our form.
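A sketch of the send() function wired to the button (storage keys as assumed above):

```python
from nicegui import app, ui

def send() -> None:
    # For now, just echo the current form values in a notification.
    ui.notify(
        f'Personality: {app.storage.client.get("personality")} | '
        f'Prompt: {app.storage.client.get("user_prompt")}'
    )

@ui.page("/")
def index():
    ui.select(["Default", "Santa Claus"], value="Default",
              label="Personality").bind_value(app.storage.client, "personality")
    ui.input(label="Your prompt").bind_value(app.storage.client, "user_prompt")
    ui.button("Send to Gemini", on_click=send)

ui.run()
```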
Now, whenever the user hits the “Send to Gemini” button, a notification is shown via the send() function showing the values of the input elements.
Modular prompts with Jinja2
Time to add the twist 🌪️. Instead of simply sending the user prompt to Gemini, we will construct a modular prompt based on the user input. With that, we will programmatically add a personality part to the prompt, so that the AI will reply with different personalities based on the user's selection.
Jinja2 is a template engine for Python. Jinja2 facilitates the creation of dynamic content across various domains. It separates logic from presentation, allowing for clean and maintainable codebases.
It uses the following core concepts:
Templates: Text files containing content specific to the use case (e.g., HTML, configuration files, SQL queries).
Variables: Inserted into templates using double curly braces ({{ variable }}).
Blocks: Defined with {% ... %} tags for control flow (e.g., loops, conditionals).
Comments: Enclosed in {# ... #} for code readability.
Even though Jinja2 is often used in web development, since it enables the creation of dynamic content, it is also used in other tools such as Apache Airflow.
In this project, we will use it to define a general template with variables that are replaced with a specific personality and the user prompt. That way, our Python code is kept clean and we have a modular solution that can easily be extended. Spoiler: we will introduce a very funny personality later.
Before we can use Jinja2, we need to add it as a dependency to our project. Since we are using Poetry, this is done via:
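Adding Jinja2 with Poetry:

```shell
poetry add jinja2
```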
We also need a folder to store our templates. A good default practice is to add a folder called templates to the module folder, so in this case:
To use Jinja2, we need to set up the environment. As explained above, the environment manages the general template configuration. We will keep it simple and just ensure that Jinja2 finds the templates in our folder:
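A minimal setup, assuming the templates live in my_gemini_chatbot/templates/ relative to the working directory:

```python
from jinja2 import Environment, FileSystemLoader

# The loader tells Jinja2 where to look up template files by name.
env = Environment(loader=FileSystemLoader("my_gemini_chatbot/templates"))
```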
Now it is time to prepare our templates. Within the templates/ folder, create three files: prompt.jinja, default.jinja and santaclaus.jinja. Leave default.jinja empty, since the default personality will just be the normal behavior of Gemini.
Let’s add the following content to the prompt.jinja template. This is our base template:
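One possible base template; the variable names personality and user_prompt are my assumption and must match the names used in the render call:

```jinja
{# Base template: combines a personality snippet with the user prompt #}
{{ personality }}

{{ user_prompt }}
```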
Now, let’s define the Santa Claus personality, by adding the following content to santaclaus.jinja:
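One possible wording for the Santa Claus personality:

```jinja
You are Santa Claus and you will answer every question in the jolly voice
of Santa. Sprinkle in a "Ho ho ho!" here and there and relate your answers
to Christmas whenever possible.
```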
Quick reminder: we have a select element in the web UI to select the personality:
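The select element could look like this (options and storage key as assumed earlier):

```python
from nicegui import app, ui

@ui.page("/")
def index():
    ui.select(["Default", "Santa Claus"], value="Default",
              label="Personality").bind_value(app.storage.client, "personality")
```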
We will use a little helper function, which maps the value of the select to a template file:
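A sketch of such a helper; the function name, select values, and fallback are my assumptions:

```python
def get_personality_file(personality: str) -> str:
    """Map the value of the personality select to a template file name."""
    mapping = {
        "Default": "default.jinja",
        "Santa Claus": "santaclaus.jinja",
    }
    # Fall back to the default personality for unknown values.
    return mapping.get(personality, "default.jinja")
```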
Now we can use this helper function and the get_template function of the Jinja2 environment to construct the prompt with our templates:
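One way this could look. To keep the sketch self-contained, the template files are inlined here via a DictLoader; in the project they are loaded from the templates/ folder via FileSystemLoader, and the file name comes from the helper described above:

```python
from jinja2 import DictLoader, Environment

# Inlined stand-ins for the files in my_gemini_chatbot/templates/.
env = Environment(loader=DictLoader({
    "prompt.jinja": "{{ personality }}\n\n{{ user_prompt }}",
    "default.jinja": "",
    "santaclaus.jinja": "You are Santa Claus. Answer in a jolly voice.",
}))

def build_prompt(personality_file: str, user_prompt: str) -> str:
    """Render the base template with the selected personality and the user prompt."""
    personality = env.get_template(personality_file).render().strip()
    return env.get_template("prompt.jinja").render(
        personality=personality, user_prompt=user_prompt
    )

print(build_prompt("santaclaus.jinja", "What is an LLM?"))
```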
If we now click on “Send to Gemini”, we can see our modular created prompt based on Jinja2 templates.
Integrate Gemini LLM via VertexAI
Before Gemini via VertexAI can be used, you need a Google Cloud project with VertexAI enabled and a Service Account with sufficient access together with its JSON key file.
Create project
After creating a new project, navigate to APIs & Services –> Enable APIs and services –> search for Vertex AI API –> Enable.
Enable API
To create a Service Account, navigate to IAM & Admin –> Service Accounts –> Create service account. Choose a proper name and go to the next step.
Create Service Account
Now ensure to assign the account the pre-defined role Vertex AI User.
Assign role
Finally you can generate and download the JSON key file by clicking on the new user –> Keys –> Add Key –> Create new key –> JSON. With this file, you are good to go.
Create JSON key file
With the JSON credentials key file prepared and stored within the project, we can initialize VertexAI.
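A possible initialization, assuming the key file is stored as service_account.json in the project root; project ID and region are placeholders. The vertexai module ships with the google-cloud-aiplatform package, which can be added via poetry add google-cloud-aiplatform:

```python
import vertexai
from google.oauth2.service_account import Credentials

# Load the Service Account credentials from the downloaded JSON key file.
credentials = Credentials.from_service_account_file("service_account.json")

vertexai.init(
    project="your-gcp-project-id",  # placeholder: your Google Cloud project ID
    location="us-central1",         # placeholder: your preferred region
    credentials=credentials,
)
```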
Now we can load models via VertexAI. In our case, we will go with the Gemini Pro model.
The model offers a start_chat function to start a conversation. It returns a Chat object, which has a send_message function to send data to Gemini. Here we could also adjust generation config parameters like temperature, but we will go with the defaults. Since we stream the reply from Gemini, we will use a helper function to get the full chat response:
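A sketch of loading the model and collecting the streamed reply; depending on your SDK version, the import path may be vertexai.preview.generative_models instead:

```python
from vertexai.generative_models import ChatSession, GenerativeModel

model = GenerativeModel("gemini-pro")
chat = model.start_chat()

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    """Send a prompt and join the streamed chunks into the full response text."""
    text_response = []
    for chunk in chat.send_message(prompt, stream=True):
        text_response.append(chunk.text)
    return "".join(text_response)
```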
So far, so good. We have a prompt prepared, VertexAI initialized, a helper function to get a chat response, so we can finally integrate Gemini.
We will add a label and bind it to a variable in the client storage, which will be used to store and render the Gemini response:
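A minimal sketch of that label; the storage key "response" is my own naming:

```python
from nicegui import app, ui

@ui.page("/")
def index():
    # The label text follows whatever is written to app.storage.client["response"].
    ui.label().classes("col-span-full").bind_text(app.storage.client, "response")
```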
And with that, we have the first version ready:
Let’s give it a try with a simple prompt and the default personality:
Looks ok, but let’s add our little twist 🌪️ and see how the Santa Claus personality works:
An AI walks into a bar
Since I became a dad myself, I enjoy throwing in dad jokes whenever possible. With this chapter, I would like to illustrate the benefits of a modular approach to prompt development with Jinja2, as well as of using NiceGUI for simple web UIs.
Let’s introduce a new personality. Create a new template file next to the others called dadjokes.jinja and add the following content:
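One possible wording for the new personality:

```jinja
You are a dad who loves dad jokes. Work a dad joke into every answer and
explain things the way a dad would explain them to his kids.
```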
To make this work, we just need to extend our helper function get_personality_file:
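The extended helper simply gains one more entry (names assumed as before):

```python
def get_personality_file(personality: str) -> str:
    """Map the value of the personality select to a template file name."""
    mapping = {
        "Default": "default.jinja",
        "Santa Claus": "santaclaus.jinja",
        "Dad Jokes": "dadjokes.jinja",  # the new personality
    }
    return mapping.get(personality, "default.jinja")
```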
And add the option to our input element, so that the user can select the new option:
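The select just needs the new option in its list:

```python
from nicegui import app, ui

@ui.page("/")
def index():
    ui.select(["Default", "Santa Claus", "Dad Jokes"], value="Default",
              label="Personality").bind_value(app.storage.client, "personality")
```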
Before we give it a try, let us implement one more thing. Let’s introduce a dark mode! With NiceGUI, this is a rather simple task. Via ui.dark_mode() we get an object, which offers two functions: disable and enable to switch the UI modes. Together with our grid approach, we can easily place two buttons next to the “Send to Gemini” button, to switch the UI mode like this:
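A sketch of the button row (labels are my assumptions):

```python
from nicegui import ui

@ui.page("/")
def index():
    dark = ui.dark_mode()
    with ui.grid(columns=16).classes("w-3/4 place-self-center gap-4"):
        # The send button now shares the row with the two mode buttons.
        ui.button("Send to Gemini").classes("col-span-8")
        ui.button("Dark mode", on_click=dark.enable).classes("col-span-4")
        ui.button("Light mode", on_click=dark.disable).classes("col-span-4")

ui.run()
```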
As you can see, the “Send to Gemini” button is not using the class col-span-full anymore but col-span-8 and since we use a grid with 16 columns, we can now add two new buttons next to it with col-span-4 each.
Putting everything together, this is the extended version of our chatbot:
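The following sketch assembles all the pieces from this article into one main.py. File names, storage keys, labels, project ID, and region are assumptions to adjust for your setup:

```python
import vertexai
from google.oauth2.service_account import Credentials
from jinja2 import Environment, FileSystemLoader
from nicegui import app, ui
from vertexai.generative_models import ChatSession, GenerativeModel

# --- VertexAI setup (placeholders: key file, project ID, region) ---
credentials = Credentials.from_service_account_file("service_account.json")
vertexai.init(project="your-gcp-project-id", location="us-central1",
              credentials=credentials)
model = GenerativeModel("gemini-pro")

# --- Jinja2 setup ---
env = Environment(loader=FileSystemLoader("my_gemini_chatbot/templates"))

PERSONALITY_FILES = {
    "Default": "default.jinja",
    "Santa Claus": "santaclaus.jinja",
    "Dad Jokes": "dadjokes.jinja",
}

def get_personality_file(personality: str) -> str:
    return PERSONALITY_FILES.get(personality, "default.jinja")

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    return "".join(chunk.text for chunk in chat.send_message(prompt, stream=True))

def send() -> None:
    personality = env.get_template(
        get_personality_file(app.storage.client.get("personality", "Default"))
    ).render()
    prompt = env.get_template("prompt.jinja").render(
        personality=personality,
        user_prompt=app.storage.client.get("user_prompt", ""),
    )
    chat = model.start_chat()
    app.storage.client["response"] = get_chat_response(chat, prompt)

@ui.page("/")
def index() -> None:
    dark = ui.dark_mode()
    with ui.grid(columns=16).classes("w-3/4 place-self-center gap-4"):
        ui.select(list(PERSONALITY_FILES), value="Default",
                  label="Personality").classes("col-span-full") \
            .bind_value(app.storage.client, "personality")
        ui.input(label="Your prompt").classes("col-span-full") \
            .bind_value(app.storage.client, "user_prompt")
        ui.button("Send to Gemini", on_click=send).classes("col-span-8")
        ui.button("Dark mode", on_click=dark.enable).classes("col-span-4")
        ui.button("Light mode", on_click=dark.disable).classes("col-span-4")
        ui.label().classes("col-span-full").bind_text(app.storage.client, "response")

ui.run()
```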
Now, let’s enable dark mode and the Dad Jokes personality to see how Gemini is explaining the term LLM to us:
As a dad, I approve this 😂.
Conclusion
Jokes aside, with this article you learned how to create your own AI chatbot based on the Gemini LLM via VertexAI as well as how to create simple web UIs in Python with NiceGUI. Together with using Jinja2 templating, even this rather short example gave us a modular AI application, which is easy to extend.
With Python, Jinja2, and NiceGUI, you can build a user-friendly interface that interacts with VertexAI’s Gemini LLM. This opens doors for various creative applications, from educational chatbots to fun personality-based chat experiences.
I hope this blog post has inspired you to explore the potential of VertexAI and experiment with building your own AI-powered applications.
Enjoy, and what do you call an AI that’s bad at following instructions? - A rebel without a clause.