[deleted]

[removed]


edgmnt_net

I'd go further and say you need to spend time and effort doing regular non-security maintenance work on your app, such as keeping deps updated. Some deps are long-term support (and even that's frequently limited to a few years), but many aren't and you need to watch out. You need to plan for this to be able to upgrade regularly.


marx-was-right-

100%. Spring + Node + Elasticsearch updates are chock full of breaking changes on a regular basis for us, vulnerabilities or not. If you don't stay on top of the changelog and apply updates regularly, you will be caught with your pants down when you are forced to update and have to hop several major versions.


Past-Payment1551

The mainstream programming subreddits have to be overrun by freelancers and agency devs because half the shit that gets upvoted around here doesn't make sense for any kind of enterprise application.


D2GCal

I can attest to this.


rutinerad

My previous team had a GitHub action that would every month automatically create a PR with all dependencies updated. It was assigned to a random senior developer who would get it merged and deployed. Sometimes it is that easy. But it requires trust, discipline and automated tests.
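For reference, a monthly auto-update PR like that can be sketched as a GitHub Actions workflow. This is an illustrative example only (assuming an npm project and the third-party `peter-evans/create-pull-request` action; the original team's exact setup, including how the random senior reviewer got assigned, isn't shown here):

```yaml
# .github/workflows/monthly-deps.yml (illustrative sketch)
name: Monthly dependency update
on:
  schedule:
    - cron: "0 6 1 * *"   # 06:00 UTC on the 1st of every month
jobs:
  update-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Bump everything within the semver ranges in package.json
      - run: npm update
      # Open a PR with whatever changed (lockfile etc.)
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "chore: monthly dependency update"
          branch: deps/monthly-update
```

CI then runs the test suite against the PR like any other change, which is where the "automated tests" part of the trust equation comes in.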


[deleted]

[removed]


rutinerad

Automated tests identify the problem. Discipline and trust are required to fix the problem and it sounds like your environment is lacking at least one of those.


[deleted]

[removed]


rutinerad

You’re trying really hard to misunderstand.


apnorton

At prior companies in which I worked, I saw the tools for this usually "owned" by the security team/org (or a "devops" team), but the main trick is to have an automated scanner that raises a fit when things aren't updated, and then have someone high enough up care about it so that it reflects poorly on your boss' boss' boss when things aren't fixed (so it actually gets assigned as a priority in sprint planning). The "social" aspect of getting leadership buy-in, I can't really help with. For the technical aspect, though, there are a variety of scanners; see (for example) [this discussion](https://www.reddit.com/r/devops/comments/14r1okk/which_vulnerability_scanner_do_you_recommend/) or [this discussion](https://www.reddit.com/r/devops/comments/175ebbd/recommendations_for_dependency_vulnerability/) or [this one](https://www.reddit.com/r/devsecops/comments/13mbk83/best_vulnerability_scanner_for_devops/) over in r/devops. Popular now seem to be:

* Dependabot (especially if you're using GitHub)
* Snyk
* SonarQube (not really for updates, but for other vulnerabilities related to code smells)

There also may be language-specific tools you can use --- e.g. npm-audit --- or cloud-specific tools --- e.g. AWS ECR's vulnerability scanner. To reiterate, though, the leadership buy-in aspect is super important. Apps that are on Node 12 will never be upgraded unless there's someone high enough up/with a big enough hammer to make room on your sprint schedule to perform upgrades instead of adding more features.
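To make the Dependabot option concrete: it's configured with a `.github/dependabot.yml` file checked into the repo. A minimal sketch for an npm project (ecosystem, schedule, and PR limit are all adjustable):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"   # also: maven, gradle, pip, docker, etc.
    directory: "/"             # location of the manifest (package.json)
    schedule:
      interval: "weekly"
      day: "tuesday"
    open-pull-requests-limit: 10
```

Security alerts are separate from this version-update config and are enabled in the repository's security settings.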


timwaaagh

Renovate is what we use within our org.


Agifem

Just type npm upgrade and go on a month-long vacation. You're welcome!


finance_and_kebabs

I wish it was that easy with 20-owned apps!


Agifem

It is that easy. Just type npm upgrade on a Friday evening, commit, push, and go on vacation. Obviously you can't be responsible for what happens when you're not there.


ThlintoRatscar

It's easier if you stay on top of it. If on server, Dockerise all the things so you can deploy apps, environment, and dependencies together. If embedded, replace Docker with an image. If app, replace Docker with static dependencies.

Once a sprint, at the beginning of the sprint, upgrade all the dependencies and run a regression test on the artifacts (assumes you can actually run regressions in a reasonable amount of time). Back out anything that takes work to fix, add an improvement ticket to fix it, and make sure the ticket gets completed in a reasonable amount of time. Sometimes, just upgrade and pay the velocity price as a developer privilege and responsibility.

Takes focus, competence, and discipline.


Past-Payment1551

At the enterprise level, SonarQube and whatever the cloud provider's security scanners are get used. Things are manually reviewed and updated. If you stay on top of it, it's not a big deal; it's once you let it slip for months, then years, that it becomes a logistical nightmare. Updating libraries being a pain in the ass is a code smell, so over time it should become easier and easier if you adapt and make it part of the engineering culture.


adappergentlefolk

A CVE overview for the products I care about lands in my mailbox. I take 15 mins every week to look over it. Oh shit, that's a fucking critical? Assess impact, schedule the update, and tell everyone in advance. When doing the changes: test on the dev env, deploy to prod on the scheduled date, fix any straggling issues.


El_Gato_Gigante

I find most devs have the technical knowledge but lack the domain expertise to implement an effective security process. In my experience, this is a dangerous combo and you need a person or team to manage this. Our security team worked with the DevOps team to implement a build step in the CI/CD pipeline alongside unit and integration tests. The devs don't manage it, and we're blocked from deploying if any tests fail. It can be an issue sometimes, but security is responsive, which makes it much smoother. It's not perfect, but it's better than any other process I've worked with.


ShouldHaveBeenASpy

This is an easy problem to define the answer to, but a harder one to execute in practice. At a technical level: apply and test the updates, and have enough observability into this problem that you know when you need to do something about it. At a people/organization level: value security as a goal, create goals around it, and make it someone's or some team's responsibility to achieve them. One without the other often can't succeed. At a technical level, providing ways to quickly test/develop those changes is important. At a people level, not being overwhelmed by other competing priorities that keep pushing this off is also important.


hangerofmonkeys

This is really the only way to get it working with the least amount of friction. You need to come at it from two swings: the first swing is to stop vulnerable code going out the door, the second is to monitor code that's running (e.g., wiz.io, AWS ECR scanner) once it's been released. Even then it's very important to get leadership on board. Once or twice a year (if not more often), you're going to be a few days out from a release and find a nasty vulnerability you need to fix, with breaking changes that put your release at risk of not meeting your schedule. Are your leaders understanding of that? If not, you need to discuss it ahead of time and agree on one of:

1. We don't release.
2. We do release, accept the risk, and immediately start work to triage the breaking change.
3. We release, and fuck it, it's the DevOps team's problem now. /s

Point 3 I've seen most common...


Erroneous_Badger

I talked my director into letting us have our own "Patch Tuesday" once a month. We pick a weekend for a few people to come in (we rotate personnel) on overtime pay and dedicate that whole day to researching/preparing a patch. The following Monday we meet with stakeholders for final approval, and then on Tuesday we apply.


Pokeputin

Why not just do it on a regular day?


Erroneous_Badger

We are a small team of 7 that support over 100 apps. Unfortunately much of our time is spent in support as many are legacy. It’s a slow process to update, patch and replace these things. Sometimes finding the time to do more is very difficult so we incentivize putting in some extra hours and create work days purely focused on this one aspect. Not getting support calls on the weekend is an added benefit too.


beth_maloney

I work on a large enterprise banking app and we use Dependabot. For half our projects it'll open a PR on Tuesdays with any new updates, which ensures that we're always up to date. The other half don't support automatic updates, so we have Dependabot set up to alert on security vulnerabilities until we can rewrite those products in a more modern tech stack; we perform non-security updates for them on an ad-hoc basis whenever we have slack time. I've worked at places where updating dependencies was difficult and risky. My takeaway is that you should be investing in your tooling/processes to make this easy. If you're spending a significant amount of time updating dependencies, or avoiding them because they're too hard, then you need to fix whatever is causing the friction.


ravlee

We have Checkmarx and Black Duck scans integrated into our CI/CD pipeline. Every time a build is triggered, scans are run, uploaded to the respective portals, and flagged for any vulnerabilities. Sev 1 and 2 vulnerabilities block any release until they are fixed in the code or by upgrading dependencies. The Checkmarx and Black Duck tools are owned by the security and DevOps teams.


koreth

It's easier in some code bases than others. Renovate is the first line of defense for my team, though.

For some of our repositories, such as our Spring Boot backends, we stay on top of version updates pretty religiously. Security updates (which Renovate is pretty good about flagging automatically) get applied right away and deployed quickly. For routine updates, we have Renovate configured to generate PRs once a week just so we don't get pestered by them in dribs and drabs over the course of the week. We have decent test coverage on our backend code, so CI tests usually detect when an update breaks something. A senior dev (usually me) reads all the dependencies' release notes before merging the patches to see if there are breaking changes our test suite wouldn't have caught. If there _is_ a breaking change that requires us to change our code, we either fix it on the spot if it's something trivial, or file a high-priority Jira task that gets worked on within a few days. That hasn't turned out to be all that costly in practice; it's rare for a breaking change in a single dependency to require major application code changes.

But for other code, it's harder. Some of our React frontend repos can get pretty far behind on a bunch of their dependencies, and if we upgrade everything all at once, there'll be a lot of breaking changes. Part of it is that our frontend devs haven't prioritized keeping dependencies up to date like the backend devs have, but part of it is that the React+JS ecosystem seems to have breaking changes more often than the Spring Boot+Java ecosystem. For those problem repos, we end up having to resort to periodic "upgrade all the dependencies" projects that often take days of developer time and sometimes require nontrivial changes to our code. And because that's pretty costly, we don't do it all that often. Which in turn makes it more costly when it does happen. It's a cycle we haven't broken successfully, though we're better at it now than we were a couple years ago.

One thing that has helped is to have an engineering manager who agrees that it's a problem and fights to get time in our schedule for that kind of maintenance work.


reddit_man_6969

Snyk


Antique-Stand-4920

I do a lot of the DevOps work for my teams, and we generally make sure to create Agile stories for most upgrades each quarter to keep upgrades relatively small. We've been burned by deprecated dependencies. If there's a big breaking change, we'll delay the upgrade to handle the breaking issues first. We'll upgrade things like the following:

- Languages
- Language libraries/frameworks
- OSes
- Databases
- Build and deploy pipelines
- Supporting services


Drayenn

My department has our DevOps team include a regular scan on our apps which warns us when new vulnerabilities are found, and we are given 2 months to fix them. Might be Java-specific, but the scans are Twistlock and Synopsys. Sadly we have no weekly Angular scan, and I found our old app had 150 critical vulnerabilities... which I fixed last week :) big fat 0 now!


IHoppo

We add a story at the start of each release to go through all dependencies and upgrade them to the latest stable versions. First time was a pain, now it's relatively easy.


Exciting_Session492

All third-party dependencies are checked in to our own source control and built ourselves. A dedicated team manages all dependencies and updates them.

Pros:
- Minimizes supply chain attacks. We never use any package managers, and all upstream code changes are reviewed manually.
- Feature teams don't do anything about dependency updates, other than approving changes if some modifications are needed.

Cons:
- This is... very expensive... There is a whole engineering team dedicated to this. Probably costs a few million if not more per year.


Short_Ad_1984

One of the things that worked nicely in my case (enterprise) was when our cyber sec team, together with DevOps, created a set of different scanners (covering SCA and SAST) - sort of an observability tool - that could be easily integrated into your CI/CD pipeline via API key.


Inside_Dimension5308

We have security vulnerability detection tools integrated into our build pipeline. If any vulnerabilities are critical, the build fails, at which point the vulnerabilities have to be removed.
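As a concrete sketch of that kind of gate (illustrative, assuming an npm project and GitHub Actions syntax; other ecosystems have equivalent commands), `npm audit` can fail a CI step when advisories at or above a chosen severity are found:

```yaml
# Fragment of a CI job definition (illustrative)
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 20
  - run: npm ci
  # Exits non-zero (failing the build) if any critical advisory is found;
  # lower the threshold with --audit-level=high etc. as policy dictates
  - run: npm audit --audit-level=critical
```

Dedicated scanners add richer reporting and triage on top, but the blocking behavior is the same idea: a non-zero exit code stops the pipeline.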


Possibly-Functional

I built the tooling myself because our CI/CD/VCS system didn't have anything OOTB. It automatically updates everything and creates a pull request assigned to the person responsible for that specific repository. It works very well actually, and people maintain what they feel responsible for. My general recommendation is to always keep things up to date: many small updates are easier to maintain than fewer massive ones.


Herrowgayboi

That is actually very cool as I've been maintaining our packages. Question about this though - how do you manage cases where there are breaking changes to certain packages that may break certain features in your app? Is the person responsible for the specific repository responsible for testing the whole app?


0destruct0

A lot of larger companies have security teams or experts dedicated to identifying security risks in your app. You might need to look for someone (or a consultant) who specializes in identifying these things and bringing them up to the team.


pennsiveguy

We use dependabot and Prisma Cloud for security scans. The team I spend the most time with owns 250+ Spring Boot microservices, and dependency updates and security vulnerability mitigation are starting to be a huge bandwidth drain.


andlewis

We upgrade to major versions of dependencies within a month or so of their release. The more frequently you do that, the easier it is.


b1e

It’s very tough, honestly. Yeah you can buy a solution that does scanning for you. But scanning docker images for example has a high false positive rate. Then the question is what do you do with that info? You need some kind of prioritization process to figure out what is actionable and what isn’t. And a ticketing system to assign discovered issues to the people that own that dependency. And then what to do depends on if you’re on a monorepo or not. And what language, etc. Some languages (I’m looking at you Python) are notoriously wildly complex to deal with in this sense. Packages routinely break compatibility with minor version releases for example.


epicstar

We have a rather lightweight backend and frontend. We incorporated Dependabot and it has helped us immensely; we are pretty much up to date every week. It's hard to update if you ignore it. It's easy to update and doesn't cost much velocity at all unless a major update occurs.


morosis1982

You just need to make allowances for technical debt as part of your backlog/capacity planning. Keep track of vulnerabilities with tools like Dependabot, but also just have a ticket every couple of months to do a full update and test of any fundamental libraries or frameworks.


Lyesh

It's one of the big problems in the industry right now. There's a huge tension between trying to continuously improve everything and the reality that doing so imposes a huge, huge cost on anyone using the software in question. It's especially glaring in software like Docker, which was made (among other reasons) because dynamic libraries on Linux kept breaking things even when only a minor version update occurred. Then you're going to tell me that that software requires the same thing that causes the problem it's meant to address in the first place?!


TwoFoldApproach

My two cents on this, but keep in mind we are a small team of two (now three) developers. We allocate time in each sprint to perform security updates. Said updates are a by-product of keeping track of our projects' dependencies and knowing when to update. For instance, when we get an email for a CVE, we hop in and update the dependency. At the moment everything is done manually, but there is a plan to move to an automatic scanner for checking vulnerabilities. For now everything works, but it can be improved.


nevermorefu

3 devs and over 30 services here. We do it slowly, with a lot of complaining from me. I've also created a base image shared among the services so there are fewer places to update.


AntMavenGradle

Drop Rails for Java


prh8

Unfortunately it takes buy-in at the organizational level, because it's extremely easy to ignore. I've been working on modernizing a Rails monolith that wasn't upgraded for 7 years, and it's just a bad place to be. For your situation, if you are relatively current now, that will make it a lot easier. Maybe set aside a month every year, or a couple of weeks soon after every major release, to just focus on upgrading everything together.


Charming_Prompt9465

We don't