A short (and fairly common) story of how quick-and-dirty initiatives to deal with security weaknesses can actually create a whole new set of problems and eventually get your systems compromised.
Our Penetration Testing team found itself running an engagement just after a PCI GAP analysis had been performed on the target environment. At the end of the testing, the report contained mostly your run-of-the-mill SSL findings along with some other issues – of lower significance – and no direct compromises. However, one of the key findings of the PCI GAP analysis was that this organisation was missing a central log collection facility. A few months down the line, a full Pen Testing retest was requested due to significant changes that had been introduced to the environment… among them, a new SIEM solution. And that is where things got really interesting.
Following our Pen Testing methodology, the first steps of the retest quickly identified a newly available network service on port 8400/TCP. We quickly identified this as ManageEngine EventLog, a SIEM solution. This was the third time this year we had come across this particular solution on an engagement, with the last two resulting in critical vulnerabilities (View Blog) being reported to the solution vendor (ManageEngine). So, with luck on our side, we knew exactly where to look and how to compromise the solution, on the condition that the built-in guest account was enabled (this is the default setting) – and it was indeed. Most of the security guidelines and PCI compliance documentation publicly available – for any product, not just ManageEngine – suggest that sysadmins must disable guest accounts and change the default password (or username) of any default administrative accounts. This is certainly a hard requirement within the PCI DSS.
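As a rough illustration of that first discovery step, a minimal TCP probe is enough to confirm whether a port like 8400/TCP is reachable before digging into the service behind it (the address in the example comment is a placeholder, not the client's):

```python
import socket

def probe_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Example (placeholder address): probe_tcp("192.0.2.10", 8400)
```

In practice a full port scanner would be used here; the point is simply that a newly opened listener on a previously quiet host is the kind of change a retest picks up immediately.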
However, this guidance is often overlooked, most times because administrators assume that a guest account can do little to no damage… after all, what damage can a guest user do? Apparently, a lot.
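One low-effort control that follows from this lesson is to sweep newly deployed products for accounts that still accept well-known default credentials. A minimal sketch, assuming the caller supplies a `try_login` callback for the product under test (the default pairs listed are illustrative, not taken from any particular vendor's documentation):

```python
# Illustrative default username/password pairs; real lists are product-specific.
DEFAULT_PAIRS = [
    ("guest", "guest"),
    ("admin", "admin"),
    ("operator", "operator"),
]

def find_live_defaults(try_login, defaults=DEFAULT_PAIRS):
    """Return every default (username, password) pair that still authenticates.

    try_login is a callable (username, password) -> bool supplied by the
    caller; how it talks to the product (HTTP form, API, etc.) is out of
    scope for this sketch.
    """
    return [pair for pair in defaults if try_login(*pair)]
```

Anything this returns should be disabled, or re-credentialed, before the system goes anywhere near production.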
Ten minutes later we had compromised every server attached (submitting logs) to the SIEM solution. It turns out that the SIEM solution relies on a local account on each client to connect to it and retrieve the logs. From there, further data and credentials were pilfered until we had unfettered access to more systems and databases than we knew what to do with. So this customer went from a no-direct-compromise position to a complete compromise of all systems and data by attempting to address other security weaknesses.
This may seem like a pat-ourselves-on-the-back kind of post, but it actually contains a few important lessons that we want to share with the world:
- Do your homework before choosing a security product, especially one that will support a function as critical and far-reaching as log collection.
- Seek professional advice if you do not have the means or time – internally – to properly deploy solutions. This is critical in order to reduce your organisation’s attack surface and exposure.
- Within our industry (Security), the devil is always in the detail. Regardless of what other vulnerabilities may have existed on the target systems, a simple configuration setting – disabling the built-in guest account – would have firmly closed the door on the victims’ computers, as there would have been no avenue for us to exploit.
Regarding the first bullet, one simple step anybody evaluating solutions on the market can take is to interrogate vendors about how they manage security internally and within their products – any kind of proof that they are a serious organisation doing its due diligence. That could be a number of things, such as periodic source code reviews, architecture reviews, penetration testing of their infrastructure and applications, or at least a general description of their software development practices in which security controls can be quickly evidenced.
A relatively easy, but possibly less accurate, alternative is to look at a vendor’s track record of disclosed vulnerabilities, and at how they treat or respond to security researchers. Bugs and vulnerabilities are part and parcel of computer engineering, but the way vendors react and respond to security findings can be very telling. Most vulnerability disclosures include a timeline running from the time the vulnerability was originally identified and reported through to the time it was publicly disclosed. Having many (or even any) that say “Vendor did not consider this a vulnerability” or “Vendor unresponsive” does not paint a nice picture.