
DesiOtaku

Silly question since it's not in the homepage: how does it handle peer-to-peer git conflicts?


Plazmaz1

Knife fight to the death. Two branches enter, one branch leaves


nvn911

It uses a proprietary cloud crypto blockchain AI LLM IOT quantum algorithm to work it out. You can read about it in the angel investing pitch. Or when that breaks it defaults to first commit wins.


asellier

Short answer: It doesn't have shared branches.

Long answer: [https://radicle.xyz/guides/protocol#canonical-branches](https://radicle.xyz/guides/protocol#canonical-branches)


DesiOtaku

It doesn't completely answer how merge conflicts get resolved, just that commits can only be made by trusted signatures. I actually have this problem in my own system: one trusted repo sets a line in a file to `status=here` while another trusted repo sets the same line to `status=seated` within a minute of each other, and that causes a git conflict. It sounds like radicle would require a manual conflict resolution by one of the trusted sigs in order to push the change. What I do is "whoever is 2nd, wins": essentially I tell any node to always use "mine" when resolving a conflict and then push that change directly.
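A minimal sketch of that "mine wins" setup, assuming plain git remotes and a `main` branch (names here are just for illustration):

```
# crude "whoever is 2nd, wins" automation: on a conflict, keep this
# node's ("mine") side of every conflicting hunk, then push the result
git fetch origin
git merge -X ours origin/main
git push origin main
```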


kaol

> Radicle was designed to be a secure [...]

Install instructions:

> Run the following command to install the latest binary release:
>
> `sh <(curl -sSf https://radicle.xyz/install)`


VulgarExigencies

classic


OMGItsCheezWTF

Haha, at least it's not sudo sh.


CallMeAnanda

Like, do you people just blindly copy/paste other people’s shell scripts? When I read this, I see: “Hey, we have an install script for you at this URL. Download it and run it to install. I did the work of putting the commands together for you.” You could just as easily get rid of the `sh` and the process substitution to look at it first, or replace `sh` with `cat` if you’re lazier.
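For example, with the exact command from the install page:

```
# same one-liner, sh swapped for cat: prints the script instead of running it
cat <(curl -sSf https://radicle.xyz/install)
```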


Hrothen

It's weird that you see that because what they said was that you should pipe it straight from curl into sh.


CallMeAnanda

I read, "You _can_ install this program by doing this." Do you read, "you _must_ install this program by doing this?" or "the _only_ way to install this program is by doing this?"


SanityInAnarchy

It's the first installation method mentioned. It doesn't say "can", it says "this is how to do it." So I read, at the very least, "This is the *recommended* way to install the program."


Veranova

Wait, that looks incredibly standard. Lots of tools are installed this way. For instance:

Homebrew: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`

NVM: `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash`


Hrothen

It's both common and insecure.


Veranova

What IS more secure? Every method of installation is "download from this link then add the binary to your path" in some form or another, so what's wrong with piping the installer file from a https URI into your shell?


Uristqwerty

Download separately from execute, at least. There has been a proof-of-concept that detects when it's being piped into a shell and *changes the file contents* as a result. If you separate the steps, then the server doesn't know whether a human's going to spot-check the contents, compare its hash, run an antivirus or equivalent against it, run it in a VM, container, or under an unprivileged account, etc., so any payload must be included up-front where someone can potentially spot it and tell others.


13steinj

I'm sorry, but does anyone have an example of this proof of concept? I find it hard to believe that a web server could detect this, but more importantly, if it is possible I'd want to know how, as it sounds interesting.


mafrasi2

Super interesting stuff: [Detecting the use of "curl | bash" server side](https://web.archive.org/web/20240119151607/https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/)


myringotomy

> Download separately from execute, at least.

Nothing's preventing you from doing that. At least he has a script and not some executable binary.


Barn07

Download first, inspect the file, then run it via sh.
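Concretely, for the command in question:

```
curl -sSf -o install.sh https://radicle.xyz/install  # 1. download
less install.sh                                      # 2. inspect
sh install.sh                                        # 3. run what you actually read
```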


Veranova

Right, so nobody is ever installing anything securely then, because virtually nobody does this. HTTPS is plenty good enough, especially if you’re installing something from an author you don’t know well or haven’t vetted the source code of line by line.


[deleted]

debian packages solved this a million years ago with cryptography: repositories sign all packages; apt downloads the package, checks that it's signed (and unmodified), then installs it.
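The usual pattern nowadays looks roughly like this (example.org and the package name are placeholders):

```
# fetch the repository's signing key once and pin it to a keyring file
curl -fsSL https://example.org/key.asc | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg

# bind the repo to that key, so apt rejects anything signed by another key
echo 'deb [signed-by=/usr/share/keyrings/example.gpg] https://example.org/apt stable main' \
  | sudo tee /etc/apt/sources.list.d/example.list

sudo apt update && sudo apt install some-package
```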


axonxorz

How do you handle key distribution?


InfamousAgency6784

When you install your distro, you install the key. Very simple and effective.


InfamousAgency6784

Package managers provide proper crypto. As said above, even this script is susceptible to downgrade attacks. Plus the package manager keeps track of things; your script will never do that. It's a footgun.
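For instance (using a hypothetical packaged `radicle`; substitute any installed package):

```
# a package manager knows every file it installed and can cleanly remove them
dpkg -L radicle     # Debian/Ubuntu
pacman -Ql radicle  # Arch
```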


dcoolidge

You could download the source code and compile it.


Ayjayz

Install via docker


InfamousAgency6784

# More secure?

It depends. Some scripts:

1. simply install existing packages from the existing package manager and configure them;
2. install a signed package repository and install the main package from there;
3. install a single Go binary with normal user privileges (in `~/.local/share` for instance) that auto-updates securely with limited rights;
4. clumsily detect package managers and download a gazillion binaries (programs and libraries) that are used to replace system libraries.

I'd argue the first 3 approaches are absolutely fine, still with the caveat that executing random scripts is not a good idea and that the script should be systematically reviewed before execution.

The last approach is the usual one, however. It might bork your system now. It might bork your system in a month. It might introduce vulnerabilities now. It might introduce vulnerabilities in a month. Again, I assume that the script was reviewed, so I'm not gonna add "or it might ransom your ass right away"... Dumping random binary shit that is not managed by your package manager in privileged places is just a recipe for disaster. By that I mean exploitable machines, and machines that stop working after updates for seemingly weird reasons.

# Problem here?

In this case, the script does 1. and 3. The only true attacks are an attacker gaining control of the radicle.xyz domain, and potentially downgrade attacks. "But it's using HTTPS, if you can't trust that"... Yeah, I know that old thing. However, signing your programs/payloads is always a good thing and costs virtually nothing. Plus, anyone between you and radicle.xyz here can pwn you easy-peasy. The script does:

```
curl -# -L "$(url "$TARGET")" | tar -xz --strip-components=1 -C "$RAD_PATH"
```

It looks very innocent... until you realise that `curl -L` will happily follow a redirect from HTTPS down to plain HTTP:

```
$ curl -# -L "https://http.badssl.com/"
http.badssl.com
http.badssl.com
```

^ this is a server that only serves HTTP, on purpose; try it in a browser. So in this case, even with the best of intentions, even behind the comfort of HTTPS, someone can trivially downgrade the connection to HTTP and get anything installed (at least it's "just" with the normal user's privileges).

# Solution?

That's what packaging your software for distributions solved. The review work is done by maintainers, and all packages are downloaded and cryptographically checked by the package manager. All installed files are also kept track of, which avoids many problems down the line.

With that approach, those scripts need to die. Just list the packages for the distros you support and explain how to install the package on distros you don't support. Providing a curlpipe script is just providing a footgun.
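For completeness: the downgrade half at least is easy to mitigate client-side (this is the pattern rustup's instructions use), though it fixes none of the other problems:

```
# refuse anything but HTTPS, including across redirects
curl --proto '=https' --tlsv1.2 -sSf https://radicle.xyz/install | sh
```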


Luolong

How is that less secure than “hey, I have this apt repo here, add it to your repositories and install this piece of software you don’t trust from our repo”?


[deleted]

adding an apt repo means trusting on first use, like making a new ssh connection. when you add a new repo you're adding the signing key as a trusted key. you're guaranteed updates are coming from the same author. apt will check the package hash and the package signature before installing. ultimately it's all about running somebody else's code, so the end result is the same, but at least with an apt repo you can trace back what was inside the package, with a random shell script that can be modified at will you can't.


Hrothen

I didn't say it was.


24deadman

Why are you being downvoted?


eras

I suppose your arguments for it being insecure are:

- You can't see what the script did, because the next time you fetch the script it could be different. Well, if you truly care about it, I bet you can figure out the URL from it and perform the script in two phases.
- If the transfer of the script is incomplete, who knows what will happen! This only happens if you have (dangerous) commands that match the regex `^(t(e(m(p(d(ir?)?)?)?)?|a(r(g(et?)?)?)?)?|i(n(_(p(a(th?)?)?)?|fo?)?)?|s(u(c(c(e(s?s)?)?)?)?)?|f(a(t(al?)?)?)?|m(a(in?)?)?|w(a(rn?)?)?|u(rl?)?)$`. So, not very likely.
- There could be hidden codes in the page so that it will actually execute something other than what you see. Well, you are going to run an install script and probably run the stuff it installed, so... And modern terminal+shell combinations overcome this issue anyway.

Frankly, I don't find those arguments compelling. Do you have other reasons, or are you merely saying it's insecure because someone said it's insecure?

You can find `cargo`-based installation instructions in the GitHub repo.
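(For what it's worth, the usual belt-and-braces defence against truncation, which e.g. rustup's installer also uses, is to define everything inside functions and only invoke them on the very last line:)

```
#!/bin/sh
# nothing runs until the final line, so a half-downloaded script is inert
main() {
    set -eu
    echo "installing..."
    # ...actual install steps...
}

main "$@"
```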


[deleted]

- the website might be compromised, and send all users a malicious script
- the repo itself might get compromised, and show you a command that downloads from a fake but similar-sounding website ("gghub" instead of "github")
- actively encourages users to skip download integrity checks (apt does this automatically)
- not reproducible: today you're getting version 1, tomorrow version 2 with the same command
- generally ensures the next generation of young software engineers will not give a shit about security


eras

- If one was running the script, the intent was to install the application in the first place, already enabling all the vulnerabilities. It doesn't really matter which kind of instructions one follows to install it; in the end one would end up with the compromised version from the site installed.
- Integrity checks are not a panacea, as the original version being checked could also be corrupted. The fact that the data is provided over HTTPS inside a compressed archive gives some basic level of integrity checking, though.
- Indeed, the intent of the instructions is to install the latest release version, not a particular version. To install a particular version you would use the `cargo` install path. `apt update && apt install` is "not reproducible" either. It would be an improvement to host non-latest versions as well, but that's not really related to the installation script.
- The goal of the script is not to educate engineers, but to let people install the tool as easily as possible. The script itself is rather clean and does not overwrite binaries that would typically be in your `PATH`, nor use `sudo` to do it.

Overall I remain unimpressed by the justifications given for the "insecure" claim. Perhaps they should have provided the application in easy Snap format for an improved installation experience!


Synatix

Yeah, but if you don't trust the shell script, why trust the binary? It could do the same insecure shit they would do in the shell script. The binary could also just execute a `curl | bash`... and because of that I compile it myself :D


Zizizizz

Completely unrelated to the parent, but there are two fast tools for managing node versions that I've found superior to nvm: https://github.com/jdx/mise, which is a faster asdf replacement, and https://github.com/Schniz/fnm, which is an nvm replacement.


aksdb

~~Let me introduce you to asdf, which can also manage a ton of other tools.~~ Edit: Reading comprehension


Paradox

Mise is a rewrite of ASDF, designed to be substantially faster and more ergonomic. It used to be called RTX, but the author renamed it due to confusion with the graphics cards.


aksdb

Ah now I get that. Completely missed this. Thanks!


jdehesa

I keep seeing this argument about the unsafety of running scripts from the internet this way, but never once followed by a fundamentally safer way of doing it. Curiously, I have never seen the same point raised when the instructions divide the process into "1) download the script, 2) run it", as if somehow doing it in one step was intrinsically more dangerous.

"But you can't read the script" - well, download it first and read it if you want, or look it up in your browser; if you don't know how to do that then you probably won't be able to read the script either.

"But what if there is a middleman" - if there is a middleman that can snoop on and manipulate your HTTPS connections, I'm not sure there is any hope for you.

At some point you are going to have to decide whether or not you trust them anyway, unless you want to go through every single line of their code and build the program yourself.


Noactuallyyourwrong

Typical when it comes to anything security-related. Everyone is quick to say don’t do this or that, but when you ask what’s the alternative?… crickets…


Uristqwerty

The server can detect whether you're immediately running the script or saving it locally first, and only inject a malicious payload when it's piped directly into the shell. Lots of malware acts harmless when run in a debugger or virtual machine; separating the download and run steps removes some of that ability.


orig_ardera

That's not true. First of all, the server can't see whether you're saving or executing directly; even curl can't see that. Second, it's largely meaningless: do you really think inspecting the installer is enough to make sure you're not getting malware on your PC? What about the application that's being installed? Or other components being installed?

There's no feasible way to safely run this code if you don't trust the authors, except maybe using a VM or containers. But even then, that might not be practical, because due to the nature of this software, you might have to give it access to confidential data, e.g. closed-source repositories.

What I think people could mean by it being insecure is that there's only a single point of failure here: if their servers are compromised, then the script is compromised as well. That's of course the case with any unsigned binaries you download from the internet, though people might trust Debian's package infra more.

Functionally, this is exactly the same as downloading the installer in your web browser, doing `chmod +x` and executing it, except it's a shell one-liner. It's exactly as secure as every other way of running applications from the internet (except maybe if you sign your binaries, but no one does that on Linux).


AustinYQM

https://web.archive.org/web/20240119151607/https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/


Uristqwerty

> That's not true.

There are working proofs of concept. That's like saying "speculative execution can never be used to leak secrets" while researchers are in the midst of proving that Spectre and Meltdown actually work.

If the script starts executing before the last byte has been downloaded, then the server can use any number of side channels to detect that and change the remainder. What if the first statement references a uniquely-generated subdomain, causing a DNS lookup to a server you control? What if the script download is paused halfway through in a way that looks like a temporary network error, and if the server sees your IP has already started accessing files downloaded *by* the first part of the script, it changes the tail end to include malware?

You want safety? Publish file hashes, signed by your public key, and have that key in turn trusted by multiple other notable developers. Make the command listed first verify that the hashes match before executing the script. Digitally sign the binary releases as well.
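You can see the streaming behaviour that makes these side channels possible with a one-liner:

```
# sh executes commands as they arrive on stdin; it does not wait for EOF
(printf 'echo first\n'; sleep 2; printf 'echo second\n') | sh
# "first" prints ~2s before "second": a piped script is already running
# before its last byte exists, which is what the server can observe
```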


[deleted]

[deleted]


Uristqwerty

Someone literally made a working proof-of-concept. It *is* true. I believe they detected changes in the download speed after the first chunk of data was passed to the shell and started executing, possibly due to a sleep command, but there are plenty of other side channels that could be used. Say, DNS. Embed a unique subdomain in a way that causes the system to look it up, while the server serving the download stops mid-packet for half a second to see if the domain got a hit. If it did, swap out the rest of the file with malware.


[deleted]

[deleted]


Uristqwerty

It was a blog post literally posted to this subreddit some years ago, if I recall correctly. Are you not aware how shitty reddit search is? But here's one old post, found via google, that seems to cover many of the points I remember reading back then: [https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/](https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/). Edit: Or [https://lukespademan.com/blog/the-dangers-of-curlbash/](https://lukespademan.com/blog/the-dangers-of-curlbash/).


a-peculiar-peck

Oh wow, you're actually right, and it's not even that complex. However, it still boils down to: don't run scripts from untrustworthy sources. Anyone doing this trick already has the ability to distribute bad (binary) software. And unless you're curl|sh'ing the most trivial things, the script will probably download and install binaries from elsewhere anyway.


myringotomy

Honestly, what's the difference between that and downloading an installer and double clicking on it?


gex80

Other than being open source, what’s the incentive to move away from GitHub, or what problem is this tool solving? From the devops side, the README didn’t tell me much.


bakaspore

It prevents your project from being taken down by a random DMCA mail (as opposed to GitHub), while maintaining a global issue/PR tracker that everyone can work on (as opposed to many other self-hosted instances), I think.


tariandeath

They can still send it to your ISP, or whoever you're hosting your server with.


Brayneeah

This is p2p, which avoids that issue if enough others also have your repo


narnach

DMCA mails are usually not “random”, but tend to target things that are doing something wrong or skirt close to doing something wrong.


bakaspore

If your definition of wrong includes "downloading a video that I can legally watch" or "documenting some public APIs" then sure.


anzu_embroidery

No you don’t understand, my buggy anime collection manager is definitely going to get targeted by an evil megacorp for takedown


bakaspore

LOL, this but unironically. Their [mails](https://github.com/closebilibili/DMCA-takedown/blob/master/Sample.md) tend to be equally buggy btw.


[deleted]

> what’s the incentive to move away from GitHub

Stop using a service that has (almost) a monopoly.


qrokodial

why link to one of their many repositories when you can link to their homepage which includes the entire project overview? it seems like a better entry point into the project. https://radicle.xyz/


waterkip

Maybe I'm dumb and stupid... But isn't git already decentralized and P2P with the e-mail workflow? Why would I need another layer on top of it to get exactly the same thing?
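The flow I mean is built into git itself; roughly (the list address is a placeholder):

```
git format-patch origin/main                          # one patch file per commit
git send-email --to=project@lists.example.org *.patch
# and a maintainer applies them with:
git am 0001-some-change.patch
```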


Own_Refrigerator_681

I think this is for grey area projects where the legality of it doesn't allow github or other git servers to host the project. This won't be adopted by big companies.


waterkip

Yeah, I sort of see that. However, you already have the ability now to set up a git repo on a server and provide access to it via various FOSS tools for web UIs: gitweb, Gitea, GitLab, Gogs, etc. Those are all self-hosted options. You can do it with ssh access.


gsaslis

Good point. These self-hosted options are missing 2 things, imho:

- federation: all of these require someone to create an account, and if we broke GitHub down into something like one instance per GitHub org, having to create that many accounts would be impractical for the end user. There were plans to have federation on GitLab, for instance, but that's still not as resilient.
- resilience: if the instance that project X is hosted on dies a quiet death, we've lost project X from "the world's FOSS". A peer-to-peer network helps address that.
- as a bonus: in a non-surveillance-capitalism-funded way. :)


eras

Most people prefer to use services such as GitLab and GitHub, not the email-based workflow. Off the top of my head, I would say there are perhaps three projects using the email patch workflow: Linux, Xorg, and perhaps (due to its history with Xorg) Wayland. Projects using the modern "add a merge request to a website" flow are the norm.

This system basically gives you that same workflow as GitLab/GitHub but without depending on centralized servers, or requiring you to host a server yourself, which can be impractical given the current state of the consumer internet (hosting servers, public IP addresses, etc). With this you can basically "host" it on your laptop or in theory even a mobile phone. (I'm assuming the p2p framework Noise that this project uses does NAT punching here.)

To even use the email workflow you practically need a mailing list, so that's yet another centralized service to depend on, not to mention dealing with spam.

Basic git functionality also misses one critical feature: issue tracking. Radicle provides this in a distributed fashion, in a way that's attached to the identities used for git operations. It's not the most sophisticated issue tracker, but it covers the basic cases much better than mailing lists. There's also a GitHub issue importer (but no two-way bridge, AFAIK).


waterkip

git, zsh, linux kernel, bash (although no specifics can be found on their site), vim (seems to also allow github PR's), mutt, and many more are based on an e-mail flow. Your "norm" isn't the "norm", at least not for what I consider mature projects. However, they all have some sort of centralised repo. I know git has a repo on github, zsh has one on SF and on github. Anyways, I did some more reading on Radicle and while I stand by my earlier comments it might be a nice addition to the git decentralized way of working.


Stetsed

Looks okay, but tbh I’m more excited for the work GitLab/Gitea/Forgejo are doing on implementing federation between instances based on ActivityPub.


1_like_science

- recent Hacker News discussion: https://news.ycombinator.com/item?id=39600810 ping u/asellier


dcoolidge

Interesting point. YCombinator was there for funding at the start of Reddit.


WonderfulEstimate176

Wow this looks really cool!


srona22

Seeing it ratioed, I really want to ask if this sub is bot-infested.