Part 65: GPT4All is compatible with the Zabbix 7.0 ChatGPT widget

Zabbix 7.0 ChatGPT widget

Those of you who were present at Zabbix Summit 2023 in Riga, Latvia probably remember how InitMAX demoed their Zabbix 7.0 ChatGPT widget. That was, and still is, extremely cool, but even though the widget itself is freely available and open source, using it with ChatGPT requires a ChatGPT account and real money.

No worries: even if you are not able to afford the cost of ChatGPT, there's a way for you to play around with LLMs (Large Language Models) inside Zabbix for free. Of course, this is no 100% substitute for ChatGPT, which is far more advanced than any local LLM, but for many use cases a local LLM is surprisingly handy, and it does not carry the security worries of the public LLMs: no account needed, no information sent to a third party, and no Internet connection required for your LLM.

Enable GPT4All API server

Did you know that GPT4All is compatible with the Zabbix ChatGPT widget, too? This is thanks to the fact that GPT4All ships with an OpenAI-compatible API. I don't know yet how to enable the GPT4All built-in API server via Python, so for now my proof of concept includes some ugly stunts. Use this method only for playing around, not in production.

To begin, I ran the GPT4All GUI client and enabled the API server in its settings.

You can find this button in GPT4All settings

Once you do that, GPT4All starts listening for incoming HTTP requests on http://127.0.0.1:4891. In other words, it binds to the local loopback interface only, not to any actual NIC. As my GPT4All runs on a different machine than my Zabbix, this is where I needed to apply a thin layer of uglifier cream. Sorry about that.
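Before doing anything fancier, you can poke the server with curl to confirm it answers. A minimal sketch, run on the GPT4All machine itself (the server only listens on loopback); the /v1/models endpoint is part of the OpenAI-style API that GPT4All mimics and simply lists the models it can serve:

```shell
# Run on the GPT4All machine itself, since the server binds to loopback only.
# /v1/models is the OpenAI-style endpoint listing the available models.
MODELS=$(curl -s --max-time 5 http://127.0.0.1:4891/v1/models) \
  || MODELS="API server not reachable - is it enabled in GPT4All settings?"
echo "$MODELS"
```

If you get the fallback message instead of JSON, double-check the API server toggle in the GPT4All settings.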

Dig a communications tunnel

How to establish communication between the two machines in a case like this? Easy: just use an SSH tunnel. On my Raspberry Pi, I ran

ssh -gL 4891:localhost:4891 my.gpt4all.machine

If you are unfamiliar with SSH tunnels, that means "Connect to my.gpt4all.machine, and while doing so, open port 4891 on the Raspberry Pi (the -g flag makes it listen on all interfaces, not just loopback) and tunnel the traffic coming into it to localhost:4891 (where localhost is from my.gpt4all.machine's point of view)".

Now, whenever I make HTTP requests to port 4891 on my Raspberry Pi, they are processed on my.gpt4all.machine.
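You can test the whole chain end to end with a chat completion request. A minimal sketch, run on the Raspberry Pi, assuming the tunnel above is up and the model name matches one you have available in GPT4All:

```shell
# The request body follows the OpenAI chat completions format that
# GPT4All understands; the model name is the one used later in this post.
PAYLOAD='{"model": "mistral-7b-openorca.Q4_0.gguf",
          "messages": [{"role": "user", "content": "Say hello."}],
          "max_tokens": 200}'
curl -s --max-time 120 http://127.0.0.1:4891/v1/chat/completions \
  -H 'Content-Type: application/json' -d "$PAYLOAD" \
  || echo "no reply - check that the tunnel and the API server are up"
```

On a CPU-only box the reply can take a while, hence the generous timeout.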

Install the Zabbix 7.0 ChatGPT widget

Next, clone the InitMAX ChatGPT widget with

git clone https://github.com/initMAX/zabbix-chatgpt-widget

and copy the resulting directory under /usr/share/zabbix/modules/:

cp -r zabbix-chatgpt-widget /usr/share/zabbix/modules/

To point the widget at your GPT4All, edit zabbix-chatgpt-widget/assets/js/class.widget.js: near the top, change the apiEndpoint to your GPT4All address, such as http://my.gpt4all.machine:4891/v1/chat/completions.

A bit further down, change the model name from the chatgpt* one to a model available in your GPT4All; I went forward with mistral-7b-openorca.Q4_0.gguf.

As with GPT4All you don't need to worry about spending any money, feel free to uncomment the max_tokens line and increase its value; in my case, I went with max_tokens: 200. That made the replies long enough for demo purposes without taking forever to generate. To put it simply, in the LLM world max_tokens caps the length of the response.

  body: JSON.stringify({
      model: 'mistral-7b-openorca.Q4_0.gguf',  // a model available in GPT4All
      messages: [
          {
              role: 'user',
              content: question,
          },
      ],
      max_tokens: 200,  // cap the response length
  })

Save the file. You are almost done. Now go to your Zabbix Administration -> Modules, click on Scan directory, and you should see this.

OpenAI widget

Configure widget

Go and create a new Zabbix dashboard, add the OpenAI widget, give the widget a name if you want, and you are done! Isn't it beautiful?

GPT4All demo

It's awesome how few changes it takes to install an open source plugin for an open source product and switch the plugin to work with another open source LLM. Thank you, Zabbix. Thank you, InitMAX. Thank you, OpenAI. Thank you, GPT4All. You all rule!

Comments

Can you introduce how to use GPT4ALL in detail?
How to configure the model?
Which directory should I download the offline model?

How do I do this in the 2.0 version of the widget? It has a field for a token when creating the dashboard?

In reply to Kris:

Hi!

Thanks for asking. I have not yet tried out the latest version of the widget. I probably should and then blog about it  :)

Cheers,

Janne
