AI Workflows with Open Source Models and AMD GPU

Tags:
  • Artificial Intelligence
First published:
06-Jul-2025
Last updated:
31-Aug-2025
Why do I even have an AMD GPU for AI?

When I was building a new PC, I was looking for a GPU that could handle gaming well, and AMD GPUs are known for good value for money there. Little did I know that I'd soon be tinkering with AI.

What intrigued me?

Curiosity. I had my eye on the cutting-edge AI tools evolving in tech and wanted to get hands-on with them. I was particularly interested in open source models that could run on my local machine. There were all kinds of things to work on: image generation, chat models, and so on.

What did I do?

I created an image annotation workflow using:

  • Google Gemma 3 - an AI model capable of image analysis.
  • n8n - Workflow automation platform.
  • LM Studio - Local AI toolkit to manage AI models.

All of the AI models involved are open source, and the tools are free to use.

With the AI workflow I turned this

Annotation input folder

into this

Annotation output sheet

The Setup

Basic requirements

  • LM Studio with the Gemma 3 model downloaded, or any other model with image-input capability. You can also run a model server in Docker, as long as it exposes an OpenAI-compatible API. If you're on an AMD GPU, I'd suggest LM Studio: it already ships with AMD-compatible engines and is easy to set up.
  • n8n installed on your machine, plus some basic familiarity with it; the official n8n docs are a good place to learn.

Google Credentials

For Google Sheets and Drive access, you'll need to obtain OAuth 2.0 credentials from the Google API Console (it's free).

n8n and Chat model integration

Now comes the part where you link up your local AI model with n8n. n8n doesn't provide a direct integration for this, so we'll use its generic OpenAI credential to connect to the local model.

  1. Start up LM Studio and, in the status bar at the bottom, switch to Developer mode.
LM Studio developer mode
  2. The left sidebar will appear; select the Developer tab.
  3. Enable the server and make sure it's serving on the local network.
LM Studio dev server

You will notice the server has started on your local IP with an OpenAI-compatible API. Note the port; we'll need it in the next step.

LM Studio server IP address
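Before wiring the server into n8n, it's worth confirming it is up and OpenAI-compatible. A minimal sketch of that check (the host and port below are examples from my setup; yours will differ):

```python
def models_url(host: str, port: int) -> str:
    """Endpoint that lists the models an OpenAI-compatible server exposes."""
    return f"http://{host}:{port}/v1/models"

url = models_url("192.168.29.253", 1234)  # example host/port
print(url)  # → http://192.168.29.253:1234/v1/models

# To query a running LM Studio server:
# import urllib.request, json
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```

If the request returns a JSON list containing your loaded model, the server is ready for n8n.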
  4. Now in n8n, go to Credentials, create a new credential of type OpenAI, and fill in the Base URL like below:
  http://{YOUR_IP_V4}:{PORT}/v1
Creating Open AI credentials

In my case, my local IP was 192.168.29.253, combined with the port from the last step.
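Under the hood, the OpenAI credential makes n8n POST chat-completion requests to that base URL. As a rough sketch of such a request (the model name and prompt are just examples), the image travels as a base64 data URL inside the message content:

```python
import base64

def build_annotation_request(image_bytes: bytes, model: str = "gemma-3") -> dict:
    """Build an OpenAI-style chat-completions payload asking a vision
    model to annotate an image, embedded as a base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_annotation_request(b"not-a-real-png")  # placeholder bytes
# n8n POSTs the equivalent of this to {Base URL}/chat/completions
```

This is just to show what the credential abstracts away; in the workflow itself, n8n builds and sends this payload for you.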

The Workflow

Download the workflow and import it into n8n with the Import from File option. You'll have to set up the Google credentials in the nodes that require them.

In the Analyze image node, select the appropriate model and the OpenAI credential you created.

Configuring AI node in n8n

In the Google Drive node, select the folder where your images are stored, and in the Google Sheets nodes, select the sheet where you want to store the annotations.

Configuring Google Drive node in n8n

In my case I created a folder named Images in my Google Drive and put some images in it.

Configuring Google Sheet node in n8n

For the Append Row in Sheet node in the workflow, select the appropriate Google document and sheet name.

Execute the workflow

Hit the Execute Workflow button and you should see the workflow running step by step.

n8n workflow execution

After it has gone through all the images, you should see the annotations in your Google Sheet as expected.

Voila! You have successfully set up an AI-powered image annotation workflow on an AMD GPU.

Bonus: You can also add more nodes to parallelize the annotation in batches, just as you would in a Python/Node.js script.
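A rough sketch of what that batching amounts to in code; `annotate` here is a stand-in for the call to the local model, not the actual workflow step:

```python
from concurrent.futures import ThreadPoolExecutor

def annotate(image_name: str) -> str:
    # Stand-in for posting the image to the local vision model;
    # in the real workflow this is the Analyze image step.
    return f"annotation for {image_name}"

def annotate_batch(image_names, workers=4):
    """Annotate images concurrently while preserving input order,
    mirroring what parallel branches in n8n would do."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(annotate, image_names))

rows = annotate_batch(["cat.png", "dog.png", "tree.png"])
print(rows)
```

`pool.map` keeps the results in input order, which matters when each row in the sheet should line up with its source image.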

I know all of this can be done with a simple script, but I wanted to explore n8n and the AI capabilities of AMD GPUs. Also, this is a no-code solution: with n8n, we can create multiple workflows and link them together as sub-workflows. I find n8n very powerful and flexible, especially for the non-technical people on your team.


Responses

If you have thoughts or feelings about this post, send them my way via your preferred communication channel.

Thanks for visiting!