What is red teaming?
When people talk about red teaming, they are referring to quite an aggressive form of penetration testing: the nuclear version of a pen test. With a vanilla pen test, the idea is to make the client aware of the problems and the related system vulnerabilities. Measures are then put in place to mitigate the vulnerabilities identified.
What a typical system pen test does not do is exploit those vulnerabilities. It is often too risky to go through with an exploit, and there is no guarantee that it will end the way you think it will. For example, if you try to exploit a remote command execution vulnerability on a system, the expectation is that you will take control of the system cleanly. However, exploitation does not always render a dependable result; the system may react unexpectedly. You may end up crashing a live system, or polluting or damaging the data that sits behind the application you are testing.
The average system pen test is not a complete replication of an actual attack. Things like social engineering, spear phishing, planting USB sticks with malware in a car park, and delivering malicious gifts (like a credential-harvesting mouse) are not tested. In a red team test, you go the whole hog, and all the vulnerabilities are exploited. You will, for example, sit in the car park of the organisation, capture Wi-Fi handshakes, and try to crack them using GPU systems online. The organisation must agree to red teaming activities as they are far more aggressive, but for testers it can be a lot more fun. Pen testers can get all their toys out and really go to town! Red teaming is quite ferocious and extends across the entire organisation, so you must ensure that the correct caveats are in place before commencing.
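To make the handshake-cracking step concrete, here is a minimal Python sketch (not a description of any particular tool) of why GPU rigs matter: WPA2 derives its key material with a deliberately expensive function, so an offline dictionary attack is a race to compute that function quickly. The SSID, MAC addresses, and wordlist below are hypothetical inputs.

```python
import hashlib
import hmac

# WPA2 derives the Pairwise Master Key (PMK) from the passphrase and SSID
# with 4096 rounds of PBKDF2-HMAC-SHA1. That per-guess cost is exactly why
# crackers offload the work to GPU rigs.
def derive_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Many access points expose a PMKID: HMAC-SHA1(PMK, "PMK Name" || AP MAC ||
# client MAC), truncated to 16 bytes, so a captured PMKID can be tested offline.
def pmkid(pmk: bytes, ap_mac: bytes, sta_mac: bytes) -> bytes:
    return hmac.new(pmk, b"PMK Name" + ap_mac + sta_mac, hashlib.sha1).digest()[:16]

def dictionary_attack(captured_pmkid, ssid, ap_mac, sta_mac, wordlist):
    # Try each candidate passphrase until the derived PMKID matches the capture.
    for guess in wordlist:
        if pmkid(derive_pmk(guess, ssid), ap_mac, sta_mac) == captured_pmkid:
            return guess
    return None
```

On a CPU this loop crawls through a handful of guesses per second; GPU crackers parallelise the same PBKDF2 computation across thousands of candidates at once, which is the whole game.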
What is the goal of the red teaming engagement?
The goal is to try and compromise the systems in whichever way you can. It is important to determine the security culture of the organisation and use any means of exploitation, be it against people, process, or technology. Whatever avenue we take, we push it through to the other side and see how far we can take it. It is not a simulation – it is an actual attack to uncover and exploit vulnerabilities.
What is blue teaming?
Blue is on the other side of the colour spectrum, hence more defensive. Blue teaming concerns the processes and detection/prevention tools you put in place to protect yourself against an attack. It relates to network intrusion detection, log analysis, and security information and event management. When analysing logs, you need tools to detect when someone is logging in or has failed to log in. The logs record activities from systems including firewalls, mail servers, and desktops. These activities are aggregated into one spot, the information is processed, and alerts are created to notify you of suspicious activity. Security Information and Event Management (SIEM) systems are an integral part of a security operations centre (SOC), where you have teams of people monitoring systems to find malicious activity.
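As a rough illustration of that aggregate-process-alert loop, here is a minimal Python sketch. The log format, field names, and threshold are all hypothetical, and a real SIEM does far more (normalisation, correlation, retention), but the shape is the same.

```python
import re
from collections import defaultdict

# Hypothetical log pattern for failed logins; real SIEMs normalise many formats.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def aggregate(sources):
    """Merge log lines from many systems (firewalls, mail servers, desktops)
    into one event stream, tagging each line with where it came from."""
    for source_name, lines in sources:
        for line in lines:
            yield {"source": source_name, "raw": line}

def alert_on_bruteforce(events, threshold=5):
    """Raise an alert once a single IP has failed to log in `threshold` times."""
    failures = defaultdict(int)
    for event in events:
        match = FAILED_LOGIN.search(event["raw"])
        if match:
            ip = match.group(1)
            failures[ip] += 1
            if failures[ip] == threshold:
                print(f"ALERT: {threshold} failed logins from {ip} "
                      f"(latest seen via {event['source']})")
```

The point of the aggregation step is exactly what is described above: one spot, one processed stream, one place for analysts to look.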
Network intrusion detection systems work at a packet level (packets going into and coming out of the network). The traffic is mirrored and aggregated into a single point and then examined by a system to see whether the packets contain indicators of malicious activity. You may look at the volume of traffic or at the actual content of the packets themselves. Once an indicator of malicious activity is found, it can be investigated and acted on.
Sometimes the actual payloads themselves sit within the packets traversing the network, and you can look for them using network intrusion detection systems. The outputs can be fed into the logs of your SIEM. Endpoint detection software (e.g. Bitdefender) sits on the individual hosts and servers and notices when potentially malicious activity takes place. These log feeds can also be sent to a SIEM. Endpoint detection, network detection, and SIEM systems are the three key tools used in SOCs, and they are the blue technologies.
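A toy version of that payload inspection, sketched in Python with Scapy: the signatures and output format here are illustrative only, and real systems such as Snort or Suricata use far richer rule languages, but the idea of matching known-bad patterns in mirrored traffic and emitting a log line a SIEM could ingest is the same.

```python
from scapy.all import Raw, sniff  # requires scapy and a mirrored interface

# Illustrative byte-pattern signatures; real rule sets are larger and smarter.
SIGNATURES = {
    b"/etc/passwd": "possible path traversal",
    b"<script>": "possible cross-site scripting payload",
}

def inspect(packet):
    """Check each packet's payload for known-bad patterns and emit an alert
    line that could be forwarded to a SIEM alongside the endpoint logs."""
    if packet.haslayer(Raw):
        payload = bytes(packet[Raw].load)
        for pattern, label in SIGNATURES.items():
            if pattern in payload:
                print(f"NIDS ALERT: {label}: {packet.summary()}")

sniff(prn=inspect, store=False)  # typically needs root and a SPAN/mirror port
```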
What are the most significant contributors to the success of these systems?
It is a combination of the technology and the people using it. Expertise is essential when it comes to the analysis of data. Endpoint protection is very effective, particularly when combined with log analysis, but it generates a lot of data. Machine learning can do some of the heavy lifting when it comes to determining whether activity is malicious; however, as every organisation is different, it ultimately comes down to internal expertise to review the output and make the security decisions. You need to tune the systems to look for the key indicators; otherwise, it will be like looking for a needle in a haystack when considering all activities. And tuning the system is a skill in itself.
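What that tuning means in practice is easiest to see in miniature. A hedged sketch, with a hypothetical threshold and allowlist: the same rule pointed at a different organisation needs different values, which is why this is a skill rather than a checkbox.

```python
# Hypothetical tuned rule: values that suit one organisation will drown
# another in noise, so both are adjusted per deployment.
FAILED_LOGIN_THRESHOLD = 10             # a busy jump host may need a higher bar
ALLOWLIST = {"10.0.0.12", "10.0.0.13"}  # e.g. scanners that always trip rules

def should_alert(src_ip: str, failure_count: int) -> bool:
    """Suppress known-benign sources; alert only past the tuned threshold."""
    if src_ip in ALLOWLIST:
        return False
    return failure_count >= FAILED_LOGIN_THRESHOLD
```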
There is no set-up-and-go solution. Observation is key, and that requires expertise. There are amazing tools available to assist, but you have to have the right people working with those tools.
Endpoint protection – it is always the place you least expect. Do you agree?
Network intrusion detection systems have fallen out of favour recently with the big push towards SIEM. My personal view is that this is a mistake. Look at the recent success of Darktrace: they examine network traffic, and it is extremely beneficial to review the traffic on your network. Organisations try to detect events as soon as they occur so they can contain a breach and get systems functioning again more quickly, and with a network intrusion detection system you can see evidence of intrusion earlier. One of the disadvantages is the amount of processing power and storage required for full packet capture. But you can put these systems in strategic positions, e.g. right next to your crown jewels, and restrict them to only look for attacks against those assets. Personally, I view network intrusion detection systems as critical.
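One way to picture that "strategic position" compromise: restrict full packet capture to the critical host with a capture filter, so the storage and processing bill covers only the traffic you actually care about. A minimal sketch with Scapy; the address, count, and file name are hypothetical.

```python
from scapy.all import sniff, wrpcap

CROWN_JEWEL = "10.0.0.5"  # hypothetical address of the critical server

# Capture only traffic to or from the crown-jewel host, rather than the
# whole network, then write it out for later analysis.
packets = sniff(filter=f"host {CROWN_JEWEL}", count=1000)
wrpcap("crown_jewel.pcap", packets)
```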
What is the significance of a purple team?
When you mix red and blue, you get purple, so a purple team test is a joint red and blue test. When you perform your red team activities, you have to let the blue team know. However, how much you disclose to the blue team is part of the scoping exercise. Some organisations prefer not to divulge too much to the blue team about planned attacks. There will always be some warning; the red team will perform their attacks, and the blue team will try to detect what is happening. In the end, there will be a team huddle about what was discovered versus what was overlooked. Considerations will be put in place for everything that was missed, and preventative measures will be implemented to guard against future attacks.
This does not only apply to systems testing. The blue team must guard against people wandering into the building when they should not be there. The blue team will receive a nod, but the extent of the nod depends on the organisation. The ultimate goal is always the same: to identify how effective detection is when someone is attempting to access data and intellectual property. At the end of the process, we decide whether it is necessary to install new processes and systems and/or learn to use the existing ones better.
Who should dictate the engagement? Red, blue, or purple?
The instigators of the trouble are the red team; the blue team reacts to what the red team does. The sign of a good blue team is when it predicts what is about to happen next based on the pattern of the attacks. This is where expertise comes in. The scope of the engagement has to be an agreement between the company doing the red teaming and the client. The red team will get permission to do what they want, but there needs to be an understanding with the client of what the risks are. The rules of engagement need to be decided by all involved parties.
When a client wants the engagement performed in a certain way, will it sacrifice its effectiveness?
Possibly, but there is a bigger picture. The client has information that we may not have. They may understand the risks to their organisation in a way that we cannot. There has to be a knowledge transfer both ways between the client and the testing organisation, and there has to be agreement between both parties. We do not step out of scope. Ever.
Please note: This article has been transcribed and summarised from our podcast of the same title.