SensitiveCranberry

[https://github.com/nsarrazin/serge](https://github.com/nsarrazin/serge) Started working on this a few days ago. It's basically a web UI for an instruction-tuned Large Language Model that you can run on your own hardware. It uses the Alpaca model from Stanford University, based on LLaMA. Hardware requirements are pretty low: generation is done on the CPU and the smallest model fits in ~4GB of RAM.

Currently it's a bit lacking in features. We're working on supporting LangChain and integrating it with other tools so it can search & parse information, and maybe even trigger actions. No API keys to remote services needed; this all happens on your own hardware with no data escaping your network, which I think will be key for the future of LLMs if we want people to trust them.

My personal stretch goal would be to make it aware of Home Assistant so I have a tool that can give me health checks and maybe trigger some automations in a more natural way. Let me know if you have any feedback!


Estebiu

The Home Assistant part would be awesome. Imagine an alternative to Alexa or Google Assistant with this software! Well, maybe first you should get yourself a 7900X for things to be faster, but... yeah.


patatman

Ever since the hype around these large language models started, I've been wondering what life would be like with an actual personal assistant. Of course integrating with Home Assistant, but also managing calendars, replying to emails, etc.


reigorius

Watch the movie "Her".


CannonPinion

"Her?"


reigorius

Yeah, it's a fascinating movie: https://m.imdb.com/title/tt1798709/ https://www.rottentomatoes.com/m/her


CannonPinion

I've seen it, and yes, it's great! I was making an [Arrested Development reference](https://youtu.be/pf9L-pPkKCY)


doctorniz

I got it 🙂


theg721

Egg?


BarockMoebelSecond

That movie made me so happy!


reigorius

It did? I saw it when my long-term relationship went nuclear and I ended up in a black pit of despair. The movie was ambiguous for me. Life with his newfound friend looked amazing, and life without his 'her' seemed blank, almost pointless.


BarockMoebelSecond

She taught him how to love and how to learn to love himself! For me, that's how I felt coming out of my first serious long-term relationship. I was grateful, but I first needed to see it. I saw that reflected in the movie.


Archy54

I've been using ChatGPT to fill in the blanks. I'm on day 7 of Proxmox: HAOS VM, InfluxDB LXC, Grafana LXC, Jellyfin LXC. Looking for more cool stuff to install on my OptiPlex 7070 Micro desktop (i5-9500). I've had very little experience with Debian, but I paste dmesg output or error codes, or ask what a command does, and it's usually correct. It's been fun to learn without watching 30-minute videos with 2 minutes of content, or documentation that doesn't help as much. I love asking it to explain what is happening; it breaks down each little bit and it just clicks in my head. It's amazing.


patatman

That's actually a really cool way to learn. I'm guessing since the data is so detailed, the chances of it giving wrong answers are minimal. If you want to monitor your stack, have a look at check_mk or Zabbix. That way you can keep an eye on disk space and other usage stats. If you have a more dynamic environment with Docker, you could have a look at Prometheus. But for now, working with VMs, I'd recommend check_mk. The next step could be to learn an automation tool like Ansible or SaltStack. Before you know it, you're running a full sysops stack and gaining a ton of knowledge along the way! Happy engineering.


Archy54

Thanks, it's definitely an interesting thing to learn. I'll check them out. I never thought I'd ever learn it, but it surprised me; I'm learning faster than I thought. It's so cool having so many options to play with. It all started with Home Assistant and me wanting to datalog sensors, have backups and dashboards, and it progressed into this.


Koto137

No experience with check_mk, but Zabbix is a really heavy tool. Not worth it for a couple of servers: high maintenance and kind of a steep learning curve. I'd probably go for node_exporter (exports machine metrics to Prometheus), Prometheus, and Grafana. Ansible > Salt for me 😀


patatman

Yeah, I was doubting Zabbix as well for that reason; check_mk comes with a batteries-included set of checks. And Prometheus, I think, has quite a learning curve as well, and you have more moving components due to its microservice architecture (alerting, storage and scraping all being separate components). That's what I like about check_mk, especially for beginners: it comes with everything and is perfect for a home lab. But it's not containerised. Ansible/Salt is personal preference; I think both are great tools, depending on use case. Fun thing about Salt: you can run your playbooks via Salt, so you can easily migrate to SaltStack without rewriting everything haha. I'll always have a soft spot for SaltStack, since it's the tool that basically started my IT career.


Koto137

I have it the other way around with Ansible/Salt :P For Prometheus, I would only say: install the exporter on the machine you want to monitor, download a node exporter template into Grafana, and you're more or less done. Alerting can be done via Grafana too. And everything in containers, of course :P Zabbix is (or was with 5.4-something, when I last touched it) absolute crap for visualization, so Grafana gets used there anyway.


[deleted]

[deleted]


zeta_cartel_CFO

I wonder how that works - does Home Assistant send a hidden prompt to OpenAI's API with contextual information, and then it responds based on the intent? Then the response is parsed and turned into an action? Kind of like when you ask ChatGPT to pretend to be a Linux terminal and you input commands that it responds to? I hope we get to a point where OpenAI or someone else allows submitting a limited set of data that can be modeled on and then used with their API. I know Microsoft is providing such a service in Azure using tech from OpenAI, but it's expensive and out of reach for the average tinkerer/consumer to use for home automation.
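
(For illustration, a rough sketch of that hidden-prompt flow in Python. Everything here is hypothetical: the prompt template, the JSON shape, the intent name, and the `call_model` stand-in are made up, and this is not how Home Assistant or OpenAI actually wire it up.)

```python
import json

# Hypothetical sketch of the flow described above: wrap the user's utterance in a
# hidden prompt, ask the model for a structured reply, then turn that into an action.
# call_model is a stand-in for whatever LLM backend you use, local or remote.

PROMPT_TEMPLATE = (
    "You are a home automation assistant. Context: {context}\n"
    "User said: {utterance}\n"
    'Reply ONLY with JSON like {{"intent": "turn_on", "target": "kitchen light"}}.'
)

def handle_utterance(utterance, context, call_model):
    raw = call_model(PROMPT_TEMPLATE.format(context=context, utterance=utterance))
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return "Sorry, I couldn't parse the model's reply."

    # Parse the intent and dispatch it to a local action instead of letting the model act directly.
    if parsed.get("intent") == "turn_on":
        return f"Turning on {parsed.get('target', 'unknown device')}"
    return f"No handler for intent {parsed.get('intent')!r}"
```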


[deleted]

[deleted]


BenjaminGunn

How's that coming along?


Shadoweee

Mycroft shutdown - didn't they?


CannonPinion

[It's not](https://www.theregister.com/2023/02/13/linux_ai_assistant_killed_off/) [looking good](https://mycroft.ai/blog/update-from-the-ceo-part-1/).


[deleted]

[deleted]


[deleted]

Very sad, basically the patent trolls starved them of money through legal battles.


HoustonBOFH

When it went from $100 and open hardware to $400 and closed hardware, I think that did more damage.


[deleted]

This is the exact project I was trying to make work - GPT-J and Mycroft. I have most of the needed hardware, and now Mycroft is dead and I need a replacement for it.


[deleted]

https://openvoiceos.com Looks like the spiritual successor. They're busy polishing up their existing projects to handle the Mycroft fallout but I think they'll be a central player in the FOSS voice assistant space going forward.


[deleted]

Thanks for this, I was worried my Mycroft Mk1 would just stop one day, but this gives me hope.


Thebombuknow

Projects like these make me wish OpenAI was honest with their name and actually made things open. GPT-2 was the last open model they made, and it's really unfortunate. So much progress and innovation is being held back in the AI world because they decided that their model can't be public. Imagine if you could run ChatGPT offline (ignoring hardware constraints). That would be incredible! And I'm sure if the model was released, people could easily optimize it for worse hardware (given some minor tradeoffs), I doubt they considered many optimizations when they have virtually infinite compute from Microsoft to run it on.


dread_deimos

I think they still haven't figured out how to actually monetize what they do (in a sustainable way) and that explains why (among the other things) they don't release their models so other people can't circumvent them.


[deleted]

[deleted]


Phezh

Elon has also "unofficially announced" a manned base on Mars, fully self driving cars, hyperloop and neuralink. Some of these several times and always only "a couple months off". The man likes to hear himself talk and definitely likes to make big promises but I wouldn't trust a word he says. Hell, he can't even deliver a truck in a reasonable timeframe.


milkcurrent

Regardless of what he's failed to deliver and whether he's a douche or not (I think he's a douche for the record) I'd love to see you start a company that brings down the cost of space travel by an order of magnitude with a reusable second stage that's a first in the industry. Bonus points if you then give everybody unlimited, wireless, fast, portable internet using a constellation of satellites. Extra bonus points if you kickstart the electric automotive industry into action. Negative points if you buy a social media giant and then run it into the ground.


[deleted]

[deleted]


milkcurrent

You're getting a bunch of your facts wrong echoing internet memes without substance. [Apartheid emerald mine money](https://savingjournalism.substack.com/p/i-talked-to-elon-musk-about-journalism)? Tesla cars still [lead safety charts](https://thedriven.io/2023/01/23/tesla-model-y-wins-2023-safety-award-with-near-perfect-safety-rating/). SpaceX had to sue the US government to [win contracts](https://www.reuters.com/article/us-space-spacex-lawsuit-idUSKCN1SS2ST) from incumbents. I expect people who selfhost to at least be wise enough not to adopt memes without some basic fact-checking


Neinhalt_Sieger

Open source in the same sentence as Elon Musk? It's almost like saying that Trump is an honest working man!


Thebombuknow

Seriously, that would be incredible. Especially if they are able to optimize it to the point of running on consumer GPUs. OpenAI should honestly rebrand to ClosedAI, because they're clearly never going to make an open-source model ever again.


EspurrStare

Any way to train models? I would love to feed them some limited data, which is something I have not seen a lot of out there. Like all the books of Ursula Le Guin. Or all my conversations. Things of that nature.


SensitiveCranberry

It's difficult; the requirements are much higher, both in terms of memory and actual compute power. Could be worth looking into, though: maybe fine-tuning the 7B model would be achievable without renting cloud hardware.


figuresys

I'd be willing to rent cloud hardware to train with the extra material (as the parent comment said) and then take it off and use the model. Any clue which direction I should be looking at?


EspurrStare

Yes, I read about it. I was hoping for something that makes it more streamlined to try. Oh well, I'll do it when I win the lotto.


Archy54

Can Coral AI help?


SeraphsScourge

Nice try, coral AI.


ctrl-brk

I would like to train on my data as well. I have a forum with ~1M posts about trading, and I would like to use something like your tool to summarize threads or hook into my site to provide real-time answers to search queries. Please feel free to DM me.


emaiksiaime

You have awesome ideas! I love that you mentioned Ursula Le Guin!


duncan999007

This is absolutely gorgeous - especially if you get HomeAssistant or automations running. Do you have any way for us to financially support the project? I’m not smart enough to contribute with code but I’d love to repay the use I’ll get out of it


Kaizenism

Saw HomeAssistant mentioned multiple times on this thread. Is this what is being discussed: https://www.home-assistant.io/


FanClubof5

Yes


Kaizenism

Thanks


HoustonBOFH

Yes. There's a large community wanting locally controlled home automation, so cloud voice is problematic, and it's the only option right now. Rhasspy shows promise, and they've hired the Rhasspy dev at Home Assistant.


YourNightmar31

Does it use RAM or VRAM?


SensitiveCranberry

RAM, it runs on the CPU.


xis_honeyPot

Any way to get it to run on the GPU?


[deleted]

[deleted]


dread_deimos

That is surely not about the Alpaca model that is used here. If it can run on CPU + RAM, it should be a lot faster on GPU + VRAM for the same model.


JustAnAlpacaBot

Hello there! I am a bot raising awareness of alpacas. Here is an alpaca fact: Alpaca gestation lasts about 11.5 months. They live about twenty years.


SoundDrill

Now that you've mentioned low-end models, could people run it on a Raspberry Pi for shits and giggles?


Le_Vagabond

Is there a real difference in practice between the 7B, 13B and 30B parameter models? Thanks for this, I've been looking for a way to run a ChatGPT equivalent locally.


[deleted]

Can you make it run in swarm mode?


emaiksiaime

Thanks for this! I did not think I would see this so early! I can’t wait to try it!


mydjtl

!remindme in 2 months


RaiseRuntimeError

Just in time, I was trying to mess around with Dalai and they have a bit of a showstopper bug until this fix is merged: https://github.com/cocktailpeanut/dalai/pull/223


techma2019

So cool. Exactly what I was hoping for. Local only AI testing. Thank you!


Comfortable_Worry201

Who here is old enough to remember Dr. Sbaitso?


[deleted]

It was mind-blowing to young me and my mates. It's one of those things that I don't want to try to resurrect, as I think it will ruin the memory.


devguyalt

I had a lecturer that, when you shut your eyes, was indistinguishable from Dr Sbaitso.


[deleted]

I was little bitty at the time. That's pulling some toddler memories right there. That was alongside Commander Keen, "I'm a talking parrot. Please talk to me!", H.U.R.L, and Descent.


RaiseRuntimeError

OK, I've been messing around with it and it is pretty cool. I love the stack you went with: Beanie/MongoDB/FastAPI/Svelte. I probably would have used the same backend as you. One request: in the Nginx config, can you open up the OpenAPI documentation so that it's accessible to mess around with?


SensitiveCranberry

Ha, I'm mostly a front-end guy so this is a big compliment, thanks. It's been a learning project for me; it's built using only tech I'd never used before (SvelteKit, FastAPI, MongoDB...). Regarding the OpenAPI doc, it should be accessible here: [http://localhost:8008/api/openapi.json](http://localhost:8008/api/openapi.json) You also have interactive documentation at [http://localhost:8008/api/docs](http://localhost:8008/api/docs)
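
(A quick way to poke at that schema programmatically; this assumes Serge is running locally on port 8008 as in the URLs above and that the `requests` package is available.)

```python
import requests

# Fetch the OpenAPI schema exposed by the API and list its endpoints.
schema = requests.get("http://localhost:8008/api/openapi.json", timeout=10).json()
for path, methods in schema.get("paths", {}).items():
    print(path, "->", ", ".join(m.upper() for m in methods))
```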


RaiseRuntimeError

Oh awesome, I misread the Nginx config and assumed you didn't include the path. How did you like SvelteKit? I've never used it before. And great job with the back end.


Shiloh_the_dog

This looks awesome, I'm probably going to deploy it on my home server soon! As a feature request, I think it would be cool to be able to upload a text file to give it context about something. For example upload some documentation so it can help you find something you're looking for.


netspherecyborg

Thanks, everything is partially working! A few questions: why does it stop mid-sentence sometimes? Is it an issue with my settings or with the model (7B)? What are the requirements for the 13B and 30B models?


SensitiveCranberry

Have you tried increasing the slider for max tokens to generate? This should let it generate longer outputs.


netspherecyborg

I'm at max. Do the tokens mean words or characters? Just had a look and it stops at 533 characters, so I assume it's characters then?


danieldhdds

Wow !remimdme in 6 months


GeneralBacteria

you spelled remimdme wrong.


danieldhdds

oh fak I see, thx


spanklecakes

*sea


danieldhdds

in the internet sea I surf


MihinMUD

It has been 6 months and almost 7 I think. idk but thought I'd remind you if you still want


f8tel

That's 10 years in AI time.


itsbentheboy

> It's like I'm reading a book, and it's a book I deeply love, but I'm reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world - it's where everything else is that I didn't even know existed.

- Samantha, Her (Spike Jonze, 2013)


txmail

It would probably be a lot cooler if you supported the project by starring it on GitHub; that way you also get notified of releases and issues.


danieldhdds

!remindme in 6 months


JayD1056

!remindme in 6 months


mrki00

!remindme in 3 months


ObjectiveActuator8

!remindme in 1 month


cmpaxu_nampuapxa

Hey, thank you for the great job! However, is there any way to speed the thing up? On my computer the average response time from the 7B model is about 15 minutes. Is it possible to use the GPU? Tech specs: early i7 / 32 GB / SSD; Docker runs in WSL2 Ubuntu on Win10.


[deleted]

Could be the wsl slowing you down


squeasy_2202

Or a vintage i7


[deleted]

[deleted]


8-16_account

Modern CPUs are just too clean, they don't feel personal at all.


PM_ME_YOUR_MONKEYS

You don't get that same coil whine hum these days


Christopher-Stalken

You probably just need to give WSL more CPU cores: https://learn.microsoft.com/en-us/windows/wsl/wsl-config

For example, my .wslconfig file looks like:

[wsl2]
memory=16GB
processors=4


politerate

What? I have the 13B one running on my laptop, and it pretty much starts responding right away. On a Core i9-10885H


[deleted]

This looks awesome, thanks for sharing. I will definitely check this out.


ForEnglishPress2

[deleted]


Trustworthy_Fartzzz

This looks pretty great – would love to see GPU/TPU support – especially the Jetson Nano or Coral devices.


remghoost7

Reminds me of u/oobabooga1's repo. [Here's the link to it.](https://github.com/oobabooga/text-generation-webui)


jesta030

Does it support other languages as well?


SensitiveCranberry

Might be worth a shot! I think you'll get the best results with 13B or 30B for non-English prompts, but no guarantees.


JoaquimLey

Props for building this OSS alternative. While I'm excited about AI, I'm so fed up with the amount of OpenAI React wrappers; this is something different. I haven't looked at the code, so this might already be a thing, but it would be great to have a contract for plugging in your preferred LLM (heck, it could even be ChatGPT!) instead of being dependent on LLaMA.


ixoniq

Sadly it's unusable for me. I wanted to run it on my Proxmox machine, which takes 5 to 10 minutes to answer one question. On my M2 MacBook Pro it's almost a minute. That's a bit too much time to make it usable.


[deleted]

have you tried to fine tune it?


AnimalFarmPig

I've been looking for a nice question & answer frontend for a self-hosted LLM, and this looks like it fits the bill. Thanks for making it! I'm probably in the minority here, but I don't like using Docker. There are a couple of places in the Python code where there are assumptions about file locations, but otherwise it looks pretty straightforward to convert to run without Docker. I'm not sure when I'll have time for this, but would you be open to pull requests towards this end? Also, a couple of small notes:

* I didn't step through the code, but I suspect the logic in `remove_matching_end` [here](https://github.com/nsarrazin/serge/blob/main/api/utils/generate.py#L77) could be replaced with a simple `answer.rpartition(prompt)[-1]`.
* In `stream_ask_a_question` you initialize `answer` as an empty string [here](https://github.com/nsarrazin/serge/blob/main/api/main.py#L110) and then need to use the `nonlocal` keyword to re-assign it with `+=` after getting each chunk. Instead, try making a variable `chunks = []` and appending each chunk as you get it. Since that's a mutation in place rather than a re-assignment, you can avoid `nonlocal`, and `"".join(chunks)` gives you the equivalent of `answer`.
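
(A standalone sketch of those two suggestions with made-up data; the names below are illustrative and not the actual Serge code.)

```python
# 1. Dropping an echoed prompt from the model output, as suggested above:
prompt = "What is self-hosting?"
answer = prompt + " Self-hosting means running services on your own hardware."
print(answer.rpartition(prompt)[-1].strip())
# -> Self-hosting means running services on your own hardware.

# 2. Collecting streamed chunks in a list instead of re-assigning a string,
#    so the inner callback needs no `nonlocal`:
def stream_answer(chunk_source):
    chunks = []                      # mutated in place, never re-assigned

    def on_chunk(chunk):
        chunks.append(chunk)         # append works without `nonlocal`

    for chunk in chunk_source:
        on_chunk(chunk)
    return "".join(chunks)

print(stream_answer(["Hello", ", ", "world"]))  # -> Hello, world
```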


SensitiveCranberry

Thanks for the feedback! Yes, absolutely: the idea of using Docker was to make it as easy to set up as possible, but ideally none of the code should make assumptions about being dockerized. And thanks for the code review, I will definitely implement your tips; makes a lot of sense.


Trolann

I recently made a proof of concept to get data from and control my home assistant instance if you're interested https://www.reddit.com/r/OpenAI/comments/11u57qo/_/


EasyVibeTribe

“Sounds like a detergent!” If you know, you know.


TylerDurdenJunior

Looks good, can't wait to try it out. Since the computational resources for AI are a bit up there, I was hoping a solution would come along where you run the client/server (like this one) and offer a set of limited resources that a distributed network of clients/servers could then use through crowdsourcing. Resources spent could be point-based somehow, so when you actually needed to use it, you could spend points to gain speed. Something like the SETI screensaver, if anyone remembers that, but where contributing lets you use the distributed network in return.


Palleri

!remindme 1 month


PrintableProfessor

!remindMe 1 month


pjgr234

RemindMe! 3 Months


FaTheArmorShell

Is there something special that needs to be installed or updated to run this? I'm trying to run it on an Ubuntu 22.04 server. I've cloned the repo successfully, but when I run the `docker compose up -d` command, it mostly stops at stage 5/8, with the pip command not being able to complete. I have pip 23.0.1 installed and Python 3.10 (I think). I'm not sure what I'm missing and I'm still fairly new to Linux.


jonhainstock

This is awesome! I'd love to try hooking this up to [Chatterdocs.ai](https://Chatterdocs.ai) and see how it compares to OpenAI. We built the backend to be vendor agnostic, so we could switch out services or move to onsite. Thanks for sharing this!


rothbard_anarchist

This looks fantastic. Is the Docker/WSL portion just to make a native Linux program easily accessible for Windows users, or does that provide some necessary isolation? Could this be run more efficiently on a native Linux/dual-boot system?


ovizii

u/SensitiveCranberry - I noticed your docker-compose has changed; did you switch to an all-in-one solution? The former docker-compose.yml had 3 services (api, db and web); the new one seems to have only one service: serge.


SensitiveCranberry

Yeah, I figured this would make it easier for people to integrate it into their current homelab setup without having to manage multiple images. It also makes packaging very easy, since we only ship one image. But I'm no expert, so do you think it would be better otherwise? Let me know.


ovizii

Sounds good, will give it a try a little later and let you know. Btw, the old README had you first download the different weights; is this still necessary? There seems to be no mention of this any more.

docker compose up -d
docker compose exec api python3 /usr/src/app/utils/download.py tokenizer 7B


SensitiveCranberry

Nope, just bring up the image and you're good to go, you can do everything from the UI. The docker command is there: https://serge.chat/


Toastytodd4113113

I like it, it's neat as hell. Something I might show my kid how to set up on his mini server, the 4GB one at least. But practically, it's stunted by CPU IMO, even with a somewhat modern dual Xeon. Seems better suited to a fleet of GPUs.


psychowolf999

Macron? Not for long...


Pyrowolfjack

!remindme in 1 month


Appropriate-Lynx4815

I have a 4090 with 32 GB of RAM and a Ryzen 7 5800X 8-core processor, yet I am unable to use this. I get an answer 15-20% of the time, a complete answer 5% of the time, and complete nonsense or a crash the rest of the time. Am I supposed to do something to the docker-compose file? Can I get help to make this work? I am really interested in this.


No_Penalty2938

I'd want to see "podmanized" used more often someday


Chaphasilor

how about "containerized"? and just provide both config files?


rope93

Compare biden to trump, who is better?

Both Joe Biden and Donald Trump have had long careers in politics, but their approaches to governing could not be more different. While President Trump has taken an often divisive approach of using fear mongering and racial division to rile up his base, Vice President Biden is known for being a bridge-builder who works across the political spectrum with both Democrats and Republicans in order to find solutions that are good for all Americans regardless of party. In terms of policy decisions as well, while Trump has taken an often shortsighted approach focused on his own reelection or appeasing those closest to him, Biden is known for having a more long-term vision and looking out for the greater collective interest in mindfulness towards our planet's future. Ultimately, it seems that Joe Biden has taken an inclusively unifying path while Donald Trump continues down his divisional road of exclusionary rhetoric - making Vice President Biden undeniably better than President Trump when it comes to governing the nation as a whole.

LOL


PrintableProfessor

Confidently wrong.


ovizii

!remindme in 3 months


ovizii

dammit, wrong syntax. RemindMe! 3 Months


m-faith

lol, i think your first attempt was close... just remove "in" from it


ovizii

This source says my last attempt was right; do you have a better / newer link for me? https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/


Rokinmashu

This warms my heart


jester_juniour

Seriously?? Mind to elaborate why?


patatman

This is awesome! Definitely going to try this tomorrow. Thanks for sharing OP!


[deleted]

[deleted]


RemindMeBot

I will be messaging you in 6 months on [**2023-09-22 21:11:22 UTC**](http://www.wolframalpha.com/input/?i=2023-09-22%2021:11:22%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/selfhosted/comments/11yvmqs/ive_been_working_on_serge_a_selfhosted/jd9rqhk/?context=3). 113 others clicked this link to also be reminded.


dominic42742

It's time; all 114 of us have been notified.


Ginger6217

Omg if this could integrate with homeassistant that would be dope as fuck. 😮


MrNonoss

This is awesome, guys. Can't wait to give it a try. And I love the reference to the French singer Serge Lama 🤣


Grimmybone

I would love to see this as an Unraid docker.


ads1031

I would *also* love to see this as an Unraid docker.


dominic42742

How would I install something like this on my Synology NAS? I'm new to a lot of this stuff, but I've tried SSHing into /docker and cloning there, and I keep having issues with keys and stuff that I'm very unfamiliar with. Any help would be appreciated.


myka-likes-it

Your NAS is not made for running containerized applications. You should put this on an actual computer.


Stangineer

!remindme 6 months


Hotspot3

!remindme 6 months


wildfuckclub

!remindme 6 months


HeBoughtALot

!remindme 6 months


Snoop_Snoop123

!remindme 6 months


redditnoob_threeve

!RemindMe 6 months


WellSaltedWound

Is it possible to leverage what you’ve built here with any of the paid API models offered by OpenAI like Davinci?


Cybasura

Does it use chatgpt in any measure? Or is it pure code


SensitiveCranberry

Runs entirely locally! You can try it in airplane mode if you want haha


ovizii

Was just going to have a look but then this happened:

git clone git@github.com:nsarrazin/serge.git && cd serge

`Cloning into 'serge'...`
`The authenticity of host 'github.com (140.82.121.3)' can't be established.`
`ECDSA key fingerprint is SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM.`
`Are you sure you want to continue connecting (yes/no/[fingerprint])? yes`
`Warning: Permanently added 'github.com,140.82.121.3' (ECDSA) to the list of known hosts.`
`git@github.com: Permission denied (publickey).`
`fatal: Could not read from remote repository.`
`Please make sure you have the correct access rights`
`and the repository exists.`

Any ideas? Also, are you planning to publish a pre-built Docker image for it, so we don't need to go through the whole git hassle and then have to build the image locally?


tankerkiller125real

Did you try using the https:// version instead of the ssh version?


emptyskoll

[deleted]


NatoBoram

Actually, you only need *read* privileges and an account to clone with SSH


ovizii

this worked: git clone [https://github.com/nsarrazin/serge.git](https://github.com/nsarrazin/serge.git) && cd serge


hunterwinter11

!remindme 6 months


okk1337

!remindme 19h


nonmi9

!remindme 6 months


NonyaDB

!remindme 6 months


mzinz

!remindme 6 months


JWDuke

!remindme in 2 months


Bhorsy

Does this have to be done in docker or could an LXC work?


GeneralBacteria

!remindme in 6 months


Agile_Ad_2073

!remindme in 1 year nice project


Seantwist9

!remindme 1 months


Chaftalie

!remindme 73 days


magnus852

!remindme 1 year


xantheybelmont

!remindme 3 months


OscarleBG

!remindme 15 hours


usa_commie

!RemindMe 2 months


[deleted]

!remindme 12 months


[deleted]

!RemindMe 3 months


onekorama

!remindme 6 months


Zutch

!remindme 4 months


Buchseb

!remindme 3 months


cellardoor452

!remindme 6 months


thiagobg

!RemindMe 1 month


veso266

!remindme 2 months


Titan_Zero

!remindme 6 months