Monday, January 31, 2011

Taking a look at what ICS-CERT thought about 2010

When I sat down and read the document entitled ICS-CERT - 2010 Year in Review, recently published by the U.S. Dept. of Homeland Security's Control Systems Security Program, I felt it offered a good chance to reflect on some of the important lessons we all learned in 2010 that may not be as obvious in the document itself.

No one can argue that 2010 was an unprecedented year for anyone who works with industrial control systems: it proved that our systems are not isolated, that they are vulnerable to attack, and that there are people who want to exploit them! However, it would appear that beyond this rather obvious point, there are still some rather large gaps between the views of ICS-CERT and those of us who are trying to secure industrial control systems.


Stuxnet
I have to give ICS-CERT credit that at least they listed "timely information sharing" as a key lesson learned. Their responses, in both timing and content, were less than desirable during the initial "panic" period of Stuxnet. The first document they released went only to a limited "private" audience, and what little information followed seemed to be cut-and-pasted straight from other research material, like that of the very impressive Symantec team. Later we had a rather insignificant alert relating to the WellinTech KingView system out of China. The first ICS-CERT announcement, released on January 11, states that "ICS-CERT has not yet verified this vulnerability". This seems rather odd when a patch for this vulnerability had been available on the vendor's website since December 15, along with a short paragraph discussing the vulnerability. That notice was on the Chinese-language website, but since this is a system that is largely localized to the China market, that seems like a logical place to look. It was later posted on their English-language website as well.

Fly-Away Teams
I think that this concept is brilliant, and it is past due. When I worked for one of the major ICS vendors, we had what we called "Tiger Teams": highly experienced and knowledgeable teams of engineers and technicians who could be sent to customer sites to assist in troubleshooting complex matters during new product installations, major software upgrades, and the like. The one thing that puzzles me about these "Fly-Away" Teams is that we are having trouble finding out how CSSP is staffing the positions. Numerous individuals have expressed interest on social networks in joining these specialized teams, yet it seems difficult to find any information on how to participate. I myself am very interested, since I am a control system engineer first and a security specialist second. It seems logical that these teams should be a balanced blend of InfoSec gurus and people who know the underlying systems well enough to compromise them using non-traditional methods. Stuxnet revealed significant vulnerabilities both in the processes used to secure control systems and in the implementation of the products installed within them. With the enormous number of ICS components available from hundreds of vendors, any team should address both aspects. What I am trying to avoid is a team of expert InfoSec professionals who come in and think of the problem only from a "Black Hat" perspective. My approach is to focus outside the traditional boundaries of the control room or rack room, from a "Hard Hat" perspective.

Responsible Vulnerability Disclosure
This will be a hot topic for quite some time. Having "provoked" numerous discussions on this at various forums, I can say it is a situation with no simple or easy answer. On one side, vendors need to protect their customers from unnecessary risk resulting from exposure. On another side, we need to make sure that customers are allowed to manage their own risk and are not dependent on someone else to manage it for them (I find it hard to believe that a vendor is able to notify 100% of its customers when it uncovers a high-risk vulnerability). And finally, someone needs to provide vendors with an "incentive" to acknowledge and correct the vulnerabilities that have been discovered. Not all vendors are the same, and some are very responsive in investigating and correcting vulnerabilities. Others ... well ... let's just say they prefer to either ignore them or send their legal staff over for a refresher on disclosure of confidential information. This, I believe, is why many private researchers simply give up and disclose publicly. Having worked as an owner, a vendor, and an integrator, I keep coming back to one problem as my lead-in argument: many vendors do not provide notification of security vulnerabilities, or allow you to download security patches, unless you have a current support contract. In my opinion, this is wrong. Why should those of us who use these systems have to pay for patches that correct the vendor's own bugs? I would be willing to say that this will change when a cyber event results in a secondary event that leads to litigation. The Olympic Pipeline incident was the result of a system bug ... should users have to pay to be informed that their system has a bug that needs to be patched?

And then we have US-CERT/ICS-CERT. As an independent researcher responsible for designing, implementing, testing, and auditing ICS security, I find it more than a bit frustrating that information seems to become "classified" after it is handed over to CERT, or is placed in "restricted" portals limited to owners or vendors. This has never made any sense to me. I have been involved in automation projects for a very long time, and in general, the engineering contractors and system integrators are a key piece of the project lifecycle. Since a large percentage of problems can be traced to configuration errors, and since the configuration of these systems is performed by engineers and integrators, it is only logical that these parties be treated as equals in a problem that faces everyone who works with control systems. We cannot forget that not all owner-operators perform their own security management, which means there are many other parties that directly need up-to-date information on issues around ICS security.

Use of USB Drives and Removable Media
This topic hits close to home, because within the first week of Stuxnet's discovery in mid-July, I developed a mitigation strategy based on Microsoft's Software Restriction Policy (SRP) that prevented the exploitation of MS10-046 without completely disabling your desktop, as the workarounds suggested by Microsoft, Siemens, and CERT would. (A video has also been posted on YouTube.) This mitigation was presented to ICS-CERT at the Fall ICSJWG conference in Seattle, and it has still not been acknowledged as an effective approach. Statements like "establish strict policies for the use of USB thumb drives ..." are just too vague and really do not say anything. Should these be administrative policies or technical policies? I will be releasing a new demonstration video that expands on SRP and shows how you can implement a Group Policy (in either a workgroup or a domain) that limits the use of USB drives to specific accounts.
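To make the SRP idea concrete, here is a minimal sketch in Python of the registry plumbing behind a "Disallowed" path rule. This is not the exact policy demonstrated in the video: the E:\ drive letter is a hypothetical stand-in for your removable media, the sketch assumes SRP is already enabled on the machine, and the registry layout should be verified against your Windows version on a non-production system before use.

```python
# Minimal sketch: write a Software Restriction Policy (SRP) "Disallowed" path
# rule into the registry so that executables on a removable drive cannot run.
# Assumes SRP is already enabled on this machine and requires admin rights;
# the E:\ drive letter is a hypothetical stand-in for your removable media.
import uuid
import winreg

SRP_BASE = r"SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"
DISALLOWED_LEVEL = "0"   # security level 0 = Disallowed
USB_PATTERN = r"E:\*"    # hypothetical removable-drive pattern

def add_disallowed_path_rule(pattern: str) -> None:
    """Create a new GUID-named path rule under the Disallowed level."""
    rule_key = "{}\\{}\\Paths\\{{{}}}".format(SRP_BASE, DISALLOWED_LEVEL, uuid.uuid4())
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, rule_key) as key:
        winreg.SetValueEx(key, "ItemData", 0, winreg.REG_EXPAND_SZ, pattern)
        winreg.SetValueEx(key, "SaferFlags", 0, winreg.REG_DWORD, 0)

if __name__ == "__main__":
    add_disallowed_path_rule(USB_PATTERN)
    print("SRP path rule written; refresh policy (gpupdate /force) to apply.")
```

In a domain, the same rule would of course be pushed through Group Policy rather than written host by host; the sketch only illustrates what the policy actually stores.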

Some Additional Thoughts
It is not enough to simply trust your vendors and integrators that the systems they install are secure. You need to test these systems in a safe manner - I recommend a rigorous test during FAT/SAT, and again during maintenance outages - to make sure that the low-hanging fruit is corrected and that obvious vulnerabilities are patched. Most security vulnerabilities are the result of implementation and configuration errors, so even if you use the most secure products any testing organization can certify, you still need to make sure that telnet has been turned off on the switches, that default user credentials have been removed or changed, and that all approved patches have been installed. Don't forget to perform a COMPLETE VULNERABILITY ASSESSMENT! If an appliance has a CPU and an operating system, it is vulnerable! Don't forget to test devices like switches, firewalls, and network-connected printers, as they all have their own vulnerabilities that can be exploited to gain access.
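As a tiny illustration of checking for that low-hanging fruit, here is a hedged Python sketch that sweeps a list of devices and flags any that still accept connections on the telnet port. The device names and addresses are hypothetical, and a real vulnerability assessment would obviously cover far more than one service.

```python
# Minimal sketch: flag network devices that still accept telnet connections.
# Device names/addresses are hypothetical; run only against equipment you
# are authorized to test, ideally during FAT/SAT or a maintenance outage.
import socket

DEVICES = {
    "plant-switch-01": "192.168.10.2",
    "plant-switch-02": "192.168.10.3",
    "print-server-01": "192.168.10.50",
}
TELNET_PORT = 23

def telnet_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on port 23."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for name, addr in DEVICES.items():
    if telnet_open(addr):
        print(f"WARNING: telnet is still enabled on {name} ({addr})")
```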

I am also in favor of using intrusion monitoring systems within the control system network, integrated with a security event monitoring (SIEM) package that collects and correlates all of the log information generated by hosts, network appliances, and other devices. One very effective way to implement IDS within the control system environment is not to perform deep packet inspection and content analysis, but to perform network behavior analysis: baseline the normal traffic patterns, then alert the SIEM when anomalies occur that signify the possible early phases of an attack. When a control system server that is normally unmanned attempts to communicate with the Internet, something is very wrong!
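A minimal Python sketch of that behavior-based idea follows, with hypothetical flow records and a print statement standing in for the real sensor and SIEM hook:

```python
# Minimal sketch: compare observed flows against a learned baseline of normal
# (source, destination) pairs and alert on anomalies. The baseline entries
# and the alert hook are hypothetical stand-ins for a real sensor and SIEM.
from ipaddress import ip_address

# Conversations observed as normal during a known-quiet baselining period.
BASELINE = {
    ("10.1.1.10", "10.1.1.20"),  # HMI -> historian (hypothetical)
    ("10.1.1.20", "10.1.1.30"),  # historian -> engineering workstation
}

def check_flow(src: str, dst: str) -> None:
    """Alert on any flow outside the baseline; escalate if it leaves the plant."""
    if (src, dst) in BASELINE:
        return
    severity = "WARNING" if ip_address(dst).is_private else "CRITICAL"
    print(f"{severity}: unexpected flow {src} -> {dst}")  # forward to SIEM here

check_flow("10.1.1.10", "10.1.1.20")  # baseline traffic, no alert
check_flow("10.1.1.20", "8.8.8.8")    # control server -> Internet: CRITICAL
```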

Finally, if you have never seen a demonstration of a control system cyber attack, I encourage you to consider having a workshop where you can educate people on cyber security and the risks that are present within the ICS environment. You can find additional information on my website at http://www.SCADAhacker.com/services.
