With time, pentesting develops into a somewhat predictable process. Because I spent years coding web applications, I now mostly get web penetration testing assignments. Web pentesting offers exciting variety, but my specialization hasn’t really changed.
Web apps come with familiar, almost predictable complexities: managing what can be executed in the browser, APIs as a staple part of the system. But Windows applications? That was entirely new territory for me.
This case study explores my dive into Windows application security testing. This field values planning but, as many famous quotes suggest, the plan gets set aside the moment it needs to be.
Overall, no matter the medium, I follow established methodologies and tactics, MITRE ATT&CK in particular. But as soon as I get my hands on the app, or even a physical device, the workflow becomes as improvised as it gets.
First, the never-ending confidence problem
The first challenge was not technical. In cybersecurity, even though everyone knows about the constant stream of attacks, you still need to persuade people to invest in their security. This sales part is honestly something I don’t enjoy much.
So there I was, convincing the client that their application needed testing. Their confidence was through the roof, despite several red flags waving frantically in the background.
“We’ve been running this for years without issues,” they assured me. “The application is only used internally, and we’ve never had a breach.”
Famous last words, right? Internal use doesn’t mean secure, and “never had a breach” often translates to “never detected a breach.”
However, I understood we were on opposite sides, ideologically at least.
Without boring you with the details, the selling point was compliance: their app had gone untested for over three years. We finally reached an agreement to start with a few days of pentesting. Let’s continue with the first pentest day.
As always, starting with reconnaissance
The first pentest day finally arrived. VM ready, installer ready, hoodie on.
I spent the first hour examining the installer. This was black box penetration testing: completely unfamiliar with the application and the product, I received only the .exe Windows installer and prepared a Windows 11 VM as a work environment.
Speaking of the installer, I have to admit it looked promising and solid by security standards. It came with proper digital signatures and certificates. For a moment, I wondered if their confidence might actually be justified.
But I did not give up that easily. As a colleague of mine always says, “no app is unbreakable”.
Since Process Monitor would generate overwhelming logs for this application, I wrote custom Bash scripts to track what the installer was deploying and where. I can hear someone saying, “But you could just filter the logs, or download some tool that does this for you.” Fair enough, but keep in mind that I am a programmer at heart, so it took me less than ten minutes to write both scripts: one to scan for changed files and one to generate a diff of the fresh files.
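For the curious, here is a minimal sketch of the idea, written in C# for consistency with the rest of this post (my actual scripts were Bash, and all paths here are illustrative): snapshot the file system before the install, run the installer, then snapshot again and print whatever is new or changed.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class InstallDiff
    {
        const string Baseline = "baseline.txt";

        static void Main(string[] args)
        {
            // Directory to watch; pass the suspected install target as an argument.
            string root = args.Length > 0 ? args[0] : @"C:\Program Files";

            if (!File.Exists(Baseline))
            {
                // First run: record the pre-install state, then launch the installer.
                File.WriteAllLines(Baseline, Walk(root));
                Console.WriteLine("Baseline saved. Run the installer, then run me again.");
                return;
            }

            // Second run: anything absent from the baseline is new or modified.
            var before = new HashSet<string>(File.ReadAllLines(Baseline));
            foreach (string entry in Walk(root).Where(e => !before.Contains(e)))
                Console.WriteLine(entry);
        }

        // Record path + last-write time, so modified files show up, not just new ones.
        static IEnumerable<string> Walk(string root) =>
            Directory.EnumerateFiles(root, "*", new EnumerationOptions
            {
                RecurseSubdirectories = true,
                IgnoreInaccessible = true // skip directories we cannot read
            })
            .Select(f => $"{f}|{File.GetLastWriteTimeUtc(f):O}");
    }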
A few minutes later, the scripts revealed the usual suspects: DirectX components, various SDKs, and… .NET 8 dependencies. So, first finding: it’s a C# application.
.NET 8, now that was interesting: a fairly recent framework that should have modern security features built in. Next, I tried installing with different options and found a lower-impact vulnerability: selecting a specific option during the install deployed a pretty old version of TeamViewer.
Still, nothing major. Everything I had found so far was worth the time, but it was nothing to actually remember this project by.
I cannot emphasize enough how important reconnaissance is as a practice. People usually rush in and try to break things. I like to do thorough reconnaissance before I even think about attacking the application: I make mental notes about the core technologies used by the application and their versions, and get up to speed on any known vulnerabilities or things to keep in mind. This was a Windows application, after all, and I had zero experience programming this type of software.
First real win: The debug information gold mine
After playing around with the application’s interface and finding only minor vulnerabilities, I stumbled upon something unexpected. The application had a feature to download debug information, for troubleshooting, of course. And yes, I triggered it.
What came back was a treasure trove of technical information that had absolutely no business being accessible to users: configuration files, dependency listings, detailed stack traces, all there in plain sight. Even more concerning, I spotted security flags explicitly turned off in these files, not to mention a curated list of all third-party modules used by the application, with their versions noted as well. With all these details in front of me, the work became effortless.
One flag in particular was alarming: a single line indicating that the application was vulnerable to deserialization attacks. Now, finally, a critical finding that immediately justified the entire pentest, and at least one more after this.
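To illustrate the kind of flag I mean, without disclosing the client’s actual configuration: in .NET 8 the legacy BinaryFormatter is disabled by default precisely because it enables deserialization attacks, and an application has to explicitly opt back in with something like the switch below (a hypothetical reconstruction, not the client’s code).

    // Hypothetical illustration: opting back into the legacy BinaryFormatter,
    // which .NET 8 disables by default because of deserialization attacks.
    AppContext.SetSwitch(
        "System.Runtime.Serialization.EnableUnsafeBinaryFormatterSerialization",
        true);

The same opt-in can also live in a runtimeconfig.json file, which is exactly the kind of configuration file that debug download handed over.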
Time was not on my side, because I spent a long while reading through and dissecting the debug information. It was not time lost, half of the vulnerabilities came from there, but I started to feel like I was running out of time, with so much more left to uncover.
Still, wanting to avoid false positives or reporting issues that cannot actually be exploited, I prepared a malicious payload to upload into the app and test this deserialization vulnerability. The app had a file-upload functionality that served as an import/export, so it was the first module tested. Spoiler: I later used the credentials and successfully exploited the system! At the risk of repeating myself over and over, I will not disclose technical details for now.
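Without revealing the payload itself, the vulnerable pattern generally looks like the sketch below; every name in it is mine, not the client’s. Attacker-controlled bytes are fed straight into BinaryFormatter, and a crafted object graph can execute code during deserialization.

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    class ImportModule
    {
        // The import/export entry point is where uploaded bytes get deserialized.
        public static object ImportProject(Stream uploadedFile)
        {
    #pragma warning disable SYSLIB0011 // BinaryFormatter is obsolete for a reason
            var formatter = new BinaryFormatter();      // legacy, unsafe serializer
            return formatter.Deserialize(uploadedFile); // attacker controls this stream
    #pragma warning restore SYSLIB0011
        }
    }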
Without going into all the third-party plugins used by the app, one was of interest: a UI library used throughout the app. I’m holding off on further details until the client patches the system. Despite a lack of public CVEs, the library’s age, almost two years since its last update, raised a red flag. By this point I was actually running out of time, so I relied on my good friend, OWASP Dependency-Check, and it found issues not only in that library but in a SQL library as well.
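If you have never used it, Dependency-Check ships as a CLI that you simply point at the application’s install folder; an invocation along these lines (paths and project name illustrative) produces an HTML report of dependencies with known vulnerabilities:

    dependency-check.bat --project "ClientApp" --scan "C:\Program Files\ClientApp" --out reports --format HTML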
Diving Deeper with dotPeek
Knowing from the reconnaissance phase that the app was written in C#, the next logical step was to fire up JetBrains’ dotPeek and examine the application code. This is where things went from “concerning” to “oh my goodness.”
The DLLs were completely unobfuscated, so there was no need to even reverse engineer anything. But that was just the beginning. Buried in the code were hardcoded database credentials in plaintext. Knowing statistically how many breaches occur because of this, I was not all that surprised to see that the password was not even hashed.
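To give a flavor of it without quoting the client’s code, the decompiled output contained something of this shape (names and values invented):

    // Illustrative reconstruction only; nothing here is the client's actual code.
    internal static class Db
    {
        // Hardcoded, in plaintext, and shipped inside an unobfuscated DLL:
        public const string ConnectionString =
            "Server=db.internal;Database=AppDb;User Id=app_admin;Password=Hunter2!;";
    }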
This easy vulnerability was the push I needed to keep analyzing the app’s code, and I started exploring the authentication area.
And then I found it: the backdoor. Something the developers had intentionally created, likely for “support purposes”, or rather, since the software can be installed offline, they needed some way to reset the master password. That need I can understand and accept, but not how it was implemented. When triggered with certain parameters, parameters that were trivially easy to guess, it completely bypassed the authentication mechanism. You would expect something like the mathematical deciphering of the World War II Enigma machine, not an algorithm you can almost guess.
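To show the class of mistake without exposing the real mechanism, this is the shape of backdoor you should never ship; everything below is invented for illustration.

    using System;

    class Auth
    {
        public static bool Authenticate(string user, string password, string supportCode)
        {
            // The backdoor: one static, easily guessable "support" code unlocks
            // every installation in existence, bypassing credentials entirely.
            if (supportCode == "RESET-" + DateTime.UtcNow.ToString("yyyyMMdd"))
                return true;

            return CheckCredentials(user, password); // the normal path
        }

        static bool CheckCredentials(string user, string password) =>
            false; // real credential verification elided in this sketch
    }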
Lessons Learned
This pentest taught me a few lessons as well:
Confidence doesn’t equal security
The most confident clients very often have the most vulnerable systems. It’s like the Dunning–Kruger effect, but for security.
Windows applications have unique attack surfaces
They require different testing approaches than web or mobile applications. This was something new for me: of everything I knew from pentesting web applications, almost nothing felt transferable here, except perhaps the knack for searching out and finding vulnerabilities.
Finding vulnerabilities does not mean that they are exploitable
This is something every security tester needs to weigh. Time pressure sometimes means not being able to verify every vulnerability, and we don’t want to miss or leave out anything we can find. But trying to break the application using each vulnerability found should be mandatory, at the very least to show the developers how the vulnerability can actually be used against the app. And all of this, of course, while adhering to the law and the contract.
Modern frameworks don’t guarantee security
During testing, I was up against a relatively new framework with several security checks built in, and I almost got paranoid about whether I would be able to bypass them. Then I discovered that the security flags had simply been turned off in many areas.
Everyone makes mistakes
The most important one. And as a programmer, I actually understood why some vulnerabilities were present:
- Having backdoors. Again, as presented above, the need to implement this feature is understandable. What is not acceptable is implementing it as a backdoor that, if leaked on the dark web, would affect basically every user of the application, everywhere it is installed. A backdoor like this should instead be tied to something per-installation, such as a self-signed certificate: if a malicious actor breaks one app, only that certificate and that installation are affected, not every copy installed on the planet (see the sketch after this list).
- Unmaintained third-party dependencies. Understandable as well: time flies when you are busy delivering, and updating plugins can become a full-time job once the app grows complex. But this is the cost of using dependencies instead of coding things by hand: you gain speed of delivery at the price of ongoing maintenance.
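Here is the kind of design I mean for the backdoor point above, sketched under my own assumptions rather than taken from the client’s system: each installation holds its own certificate, and a master-password reset must present a request signed with that installation’s private key. If one certificate leaks, one installation is affected, not every copy of the app.

    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;

    static class SupportReset
    {
        // Verify that a reset request was signed with this installation's own key.
        public static bool IsResetAuthorized(
            byte[] resetRequest, byte[] signature, X509Certificate2 installCert)
        {
            using RSA? rsa = installCert.GetRSAPublicKey();
            return rsa != null && rsa.VerifyData(
                resetRequest, signature,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }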