CompTIA PenTest+ Chapter 5 : Reporting and Communication
Table of contents

- 5.1 Given a scenario, use report writing and handling best practices
- 5.2 Explain post-report delivery practices
- 5.3 Given a scenario, recommend mitigation strategies for discovered vulnerabilities
- 5.4 Explain the importance of communication during the penetration testing process
5.4 Explain the importance of communication during the penetration testing process
I know that I’m starting this chapter with the last section, but communication is one of the most important things a penetration tester (and their team) can maintain with the client. Remember why we are hired - to find flaws in the organisation’s security posture within their scope, budget, and industry regulations and standards. That balance was first articulated through communication, and communication remains a continuous process. During the test we will find all sorts of things which we must alert the client to - maybe there was an indicator of compromise somewhere!
Reasons for communication
A mature penetration testing team will keep in regular contact with their client, updating them on their progress, which gives both sides an awareness of the direction of the project. They know that we’re in line with the rules of engagement as specified in the SOW, and we’re aware of the client’s expectations. Regular communication also makes it easier to avoid conflict - if our `nmap` scan was so thorough and long-running that it bled into peak hours of business activity, it may reduce the client’s ability to handle all that demand. To resolve this, the IT department can notify the penetration tester(s) of the surge and ask them to pause the scan. Some situations may see escalating tensions between the two sides, which only worsen if there is no trust between them. Progress reports may be necessary for larger undertakings, but other types of escalation can arise if, say, a virus is found on a C-suite member’s computer. With good communication we can handle such an array of high-intensity moments by getting people organised and oriented towards a solution. But this obviously presupposes we have answers - some way of guiding each party beyond this bump in the road. Oftentimes, much of communication doesn’t guide us anywhere and isn’t by itself indicative of how exactly we progress, so whilst there are strong reasons to establish paths for communication, you must know exactly what should be travelling through these channels.
Communications Paths
This brings us onto establishing communication channels with the client, which should be specified before the test takes place. When I say communication paths I also mean what relation we have to each member of the organisation. We wouldn’t spill all the secrets to an intern, but we would release more information to an executive. The mediums of communication may be a lot sparser for those in the IT department, as the data is confidential - even though they will be the ones enacting the repairs. In response to requests for information, the penetration testing team will say: “Our contract requires us to keep the results confidential until we release our final report to management, except under very specific circumstances.” Once we’ve determined all the players in and amongst the organisation and how much information we are allowed to give to each one, we also need to determine how often we send information over these channels. The most frequent exchange is with the client, and we may need to provide periodic status updates. One common way of achieving this is through standing, in-person meetings between key stakeholders and penetration testers, allowing them to discuss outstanding issues and provide updates on the rules of engagement if necessary. To determine the frequency of communication with each group, consider how much influence that group has over the proceedings and to what degree they can change the specs and direction of our test: the client has the most influence, while interns and offshore departments have the least.
The length of the entire engagement will also help determine the rhythm of communications. A general rule of thumb is that the shorter the engagement, the more intimate and frequent the meetings: smaller engagements have a narrower scope, less to do, and usually focus on critical infrastructure, so they may require a daily briefing. Longer assessments, which are more thorough and time-consuming, might instead involve a once-a-week meeting with the client to maintain that situational awareness.
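The rule of thumb above can be sketched as a tiny function. The thresholds here are illustrative assumptions for the sake of the example, not fixed industry values:

```python
# Hypothetical sketch of the cadence rule of thumb: shorter engagements get
# more frequent status meetings. Threshold values are assumptions.

def status_meeting_cadence(engagement_days: int) -> str:
    """Suggest a status-update rhythm based on engagement length."""
    if engagement_days <= 5:
        return "daily briefing"        # short, focused test on critical assets
    elif engagement_days <= 20:
        return "twice-weekly meeting"
    else:
        return "weekly meeting"        # long, thorough assessment

print(status_meeting_cadence(3))   # short engagement
print(status_meeting_cadence(30))  # long engagement
```

In practice the cadence would be negotiated in the SOW rather than computed, but the inverse relationship between engagement length and meeting frequency is the point.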
Communication Triggers
Once we have specified paths, known rhythms and all the different ways in which we communicate with the client, stakeholders and other organisational staff, it is time to identify all the possible happenings which would be cause for communication - communication triggers. Oftentimes these triggers fall outside the normal schedules, but not always. The Statement of Work (SOW) may also specify triggers other than the ones I cover below.
The first trigger is the completion of a testing stage. The SOW outlined by the client and penetration testing team will have articulated key milestones which demarcate each stage of testing, so when one stage has been completed we can notify management as we move onto the next one. This could be a trigger outside of communication schedules if it is a smaller, quicker test, as we have a lot to cover in that time frame; a longer test, however, would probably have this milestone folded into a scheduled meeting, as larger organisations tend to have stricter rules about moving between stages.
Discovery of a critical finding. If a test uncovers a critical vulnerability, we cannot wait for the release of the report to inform the client - we must communicate it to them quickly, as it puts the organisation at an unacceptable level of risk. Such emergency scenarios should be covered beforehand, with a procedure for the tester(s) to notify management without leaking the information or having it spread in-house or outside the company.
Indicators of compromise (IOCs) are the last main communication trigger. This is the most dangerous: we have only found an indicator, and we haven’t gone through the incident response process to check whether the attackers are still on the system. We need to inform management immediately so they can begin their investigation of the compromised machine.
Goal Reprioritisation
If we do find a trigger like an IOC, it may change the nature of the test completely, and the client will want to reorient themselves as this is more serious and affects critical business processes. Depending on the subsequent meetings between the penetration testing team and the client, the tests could be postponed, or the scope altered so that the team assists in the incident response process.
Outside of reorientation due to triggers, it may be that during the test itself - say a combined network and web application pen test - the network side of things proves incredibly secure from the offset, whereas there seems to be a lot of potential in the web applications. The team could discuss this in their next meeting and request a deviation from the original plan: pursue this line of inquiry and scale back the search in vectors whose security they believe to be good. Once the group of stakeholders and the client are in agreement, we have an updated Statement of Work.
5.3 Given a scenario, recommend mitigation strategies for discovered vulnerabilities
So, we conducted our web application penetration test and found all sorts of vulnerabilities, ranging from technical ones like poor sanitisation of input to people problems such as passwords left on desktop sticky notes - which, if we used a `metasploit` module and captured those credentials, could get us into other accounts.
The three main categories that our report will try to improve are:
- The people category. If we could just get people to stop falling for phishing attacks, so many breaches wouldn’t happen - but in fairness, the attackers are getting smarter.
- The process category. This looks at the business functions from a bird’s-eye view - from input to output. If anything in the architecture chain could lead to exploitation, we need to remove the vulnerable process or add wrapper processes which check the security of the previous function’s output. A good business needs chains which are rigid for the sake of security, while the interface behind them and the infrastructure supporting the project stay fluid for the sake of the business - able to change and develop new products and features where possible - so the security modules should also have an element of abstraction to raise the minimum standard across products. There should be no “alternate story-lines”. For example, there should be a bullet-proof process for backup media, and nobody should be able to just walk in and take media off someone’s desk or from the backup room. Other examples include the management of IDs, the creation of new accounts, applications, etc.
- The technology category , where I mentioned an example at the very start. We look at the controls the business has in place for the hardware and software in use. Do they need updates ? Do they need replacing ?
These are the categories we shall keep in mind. Whilst teams will choose to structure the report however they like, these principles are the same regardless - ultimately we wish to present the error itself and the remediation for it. They can create categories like “Applications”, with sub-categories such as Web, Database and Desktop. When specifying categories, the main consideration is how far the company has skewed their infrastructure and products in a particular direction - are they mostly creating iOS apps, or do they have very few apps at all? At the start of our report writing we will demarcate the different sections of infrastructure and, depending on the test, gather more information about sub-categories.
The most common findings searched for in a given infrastructure (and ones you should know for the exam) are :
Findings | Remediation |
---|---|
Shared local administrator credentials. When more than one computer shares the same password, accountability becomes harder to discern. IT teams often use a single default password for all of these accounts to simplify administration. Penetration testers and attackers know this and often key in on those accounts as targets for attack. It is better to use a password management tool which gives the local admin on each machine in the domain a unique password. | Randomise credentials and set up the Local Administrator Password Solution (LAPS). This is Microsoft’s answer to domain admins storing too many passwords: the tool stores and manages the passwords in AD, mapping passwords to computer accounts. |
Weak password complexity. Giving users all the responsibility for choosing their passwords, without any requirements, is asking for trouble - they just want something memorable, but they jeopardise the organisation! Such passwords become easy to brute-force, whether against live systems or a captured file. | Remediating this vulnerability is simple: we can set technical policies on a Windows system that require users to adhere to password length and complexity requirements. |
Plain text passwords. Even if the passwords are mightily complex, what’s the point if I can read them? You often find these on development servers, yet the credentials are still used in production - for the database and/or admin account etc. | Always store passwords in encrypted form if the data needs to be read back over and over, or hash them - hashing works perfectly for accounts, as we only need to compare hashes, not content. |
No multifactor authentication. The reliance upon passwords often constitutes a security risk and we need something more than just that knowledge-based authentication. We don’t just want to authenticate with something we know, but something we have, maybe something we are or do. Something I have could be a keyfob or an app that emits OTPs for me to use. Something you are is to do with biometrics and something I do would be something idiosyncratic like the way I type or write my signature. | By combining factors from two different categories we make it exponentially harder for an attacker to breach our account. Something like a password and PIN which are both things I know does not constitute MFA. |
SQL injection. Any application that has an SQL database backend AND poor sanitisation of input is liable to this attack. Attackers can extract customer details and admin information, start shells, and in the worst case even modify the database itself. | Given the requirements for this attack we can establish two recommendations. First, sanitise user input on the frontend and try to pass the cleanest queries possible to the server. Attackers can obviously bypass this and supply all sorts of fudged data, which is where the backend defences come in. Second, parameterise queries: rather than concatenating user input into the SQL string, pass it as a bound parameter so the database treats it strictly as data. |
Unnecessary open services. Sometimes applications start all sorts of services, or a default service that the admins thought to be harmless is never closed down. Services can also linger longer than they should, leaving things like FTP and SSH open to attack. Worst case scenario, these services are running as root, so if they’re compromised it can mean big trouble! | Unnecessary open services also tend to go unpatched, as nobody is taking care of them, so it is critical that admins harden the system. Adopt the principle of least services, repeat the review on a periodic basis, and reconfigure devices as business needs change. |
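The parameterised-query remediation from the SQL injection row can be sketched with Python's built-in `sqlite3` module. The table and column names here are invented for illustration; the key point is that user input is bound as a parameter, never concatenated into the SQL string:

```python
# Minimal sketch of parameterised queries as an SQL injection defence.
# Table/column names are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern (string concatenation) would let the payload rewrite
# the query:
#   conn.execute("SELECT role FROM users WHERE username = '" + user_input + "'")

# Safe pattern: the driver binds the value, so the payload is treated as data.
rows = conn.execute(
    "SELECT role FROM users WHERE username = ?", (user_input,)
).fetchall()
print(rows)  # [] - the injection string matches no real username

# A legitimate lookup still works as expected.
print(conn.execute(
    "SELECT role FROM users WHERE username = ?", ("alice",)
).fetchall())  # [('admin', )]
```

The same `?`-placeholder (or named-placeholder) pattern applies to most database drivers, though the exact placeholder syntax varies.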
So, that’s all the stuff you should be listing under “common findings” - but spell it out for your client, and remember to be clear and concise!
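The plain-text password remediation above can likewise be sketched with the standard library's PBKDF2 implementation. The salt size and iteration count here are illustrative assumptions; production systems should follow current guidance on work factors:

```python
# Sketch of the "never store plain text passwords" remediation: store only
# a salted hash, and verify by recomputing. Parameters are illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; the password itself is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("S3cure!pass")
print(verify_password("S3cure!pass", salt, stored))  # True
print(verify_password("wrong-guess", salt, stored))  # False
```

This matches the point in the table: for account login we only ever need to compare hashes, never read the original password back.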
5.1 Given a scenario , use report writing and handling best practices.
Now then, we have an idea of what will be going in the report, seeing as we may encounter these findings. Remember the target audience and how technical the report needs to be: ultimately we want to give management a clear roadmap to progress, and give technical staff the key attributes they must look at when patching or replacing hardware and/or software.
This report will often be one in a timeline of reports and will be used to gauge the company’s progress over time. This is where the company may have a preferred template they want used, as it lets them map all the different fields such as “Web Applications” and see whether or not they have grown more secure over time.
Normalisation of data
So, the test is finished and we have all these screenshots, scripts, `nmap` files, text files with configuration data, IP addresses and all sorts - much of which overlaps and needs to be combed through. This process of clearing up, filtering and removing redundant data is called normalising the data. The aim is to improve the integrity of the data and determine what is relevant. When testing, we do a lot of work on targets that may not even be vulnerable, for which all we would say is “all clear” - but depending on the client, they may want any and all information collected on every tested host, not just the vulnerable ones.
By breaking the data down into components we can then begin to understand what needs to be shown to each target audience. We wouldn’t show the executives the `nmap` scans, for example, though a person from the IT department may want to peek into the appendix. By formatting data into a common language and removing the fat, we make the report generation process easier, the appendix easy to understand - as we have broken it down into categories - and subsequent analysis more straightforward (for both the tester and the reader).
The kinds of data that may need to be analysed include:
- Addresses of assets with findings
- Assets assessed during testing
- Finding data
- Scan results and recon data from different sources or scan types
- Testing steps conducted, including timelines and chains of attack
- Catalogued evidence, including screenshots, photos, or other tangibles
- Payloads used, tool configurations used or other environmental setups, and rule sets used
- Control evasions
- Changes to targets made during testing
Not all of this information will need to be included in the report. However, it is important for the tester to analyse these data points to ensure that the testing has been conducted thoroughly according to the scope and testing requirements.
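One of those analysis steps - checking that testing was conducted according to scope - comes down to simple set comparisons. The addresses below are invented for illustration:

```python
# Sketch of a scope-coverage check over the analysed data points:
# every in-scope asset should have been assessed, and nothing outside
# the agreed scope should have been touched.
scope = {"10.0.0.5", "10.0.0.7", "10.0.0.9", "10.0.0.12"}
assets_assessed = {"10.0.0.5", "10.0.0.7", "10.0.0.9"}

missed = scope - assets_assessed        # in scope but never tested
out_of_scope = assets_assessed - scope  # tested but not authorised

print(sorted(missed))        # ['10.0.0.12'] - needs testing or a noted exclusion
print(sorted(out_of_scope))  # [] - nothing outside the rules of engagement
```

Any entries in `missed` should either be tested before the report is finalised or documented as exclusions; anything in `out_of_scope` is a rules-of-engagement problem to raise with the team immediately.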
Written report of findings and remediation
This is the general structure of a penetration testing report, which may be incredibly detailed in one aspect and brief in others - it depends on the test itself, the findings and what the company will be prioritising when they read through it.
The first section in a report is the executive summary. This should ideally fit on a single page, as it is a non-technical review of all the findings - the target audience being C-suite executives and other non-technical stakeholders. First we present the background: why testing was conducted in the first place, what particular aspect of the infrastructure was tested, and if it was a black box test, why that was commissioned. We may also note the testing window, the testing types conducted and the goals. If it was a compliance assessment we should make note of it, along with any incidents and whether things were successful. With this we can give a general statement of the company’s security posture, focusing on strategic concerns - risks to the business.
After this will be the methodology. This will separate the men from the boys in terms of penetration testing competence. A skilled security professional should be able to understand your testing process, go to the infrastructure and get the same results (presuming it’s in the exact same state). A good penetration testing team will know how to break down any complicated steps into simpler sections, and moreover will have a better idea of which aspects of the system need to be scrutinised. The executive summary told the reader what we were looking to test and why; now it’s time to guide them through the journey. This section effectively states how testing was done: the process of finding targets - keeping to scope and the rules of engagement - and then, once we found assets, how we tested them. It is important to note anything which was left out, whether in keeping with the SOW or because of time constraints. Moreover, the penetration testing team should highlight any technical constraints, scope limitations, rules of engagement, the types of testing done and the target infrastructure. The amount of code should be kept to a minimum, annotated, and accompanied by explanations of its purpose - how it helped you find a particular target or reach a critical milestone. If it is code that could be harnessed by some low-level threat actor like a script-kiddie, this should be mentioned. Extensive scan reports, code blocks and the like should be put in the appendix, with a link included in the body of the report.
Findings and remediation. Simply put, this is all the evidence of the “pressure points” in the company’s infrastructure that we found during testing. This section is the meat and potatoes, and we can be as technical as we want - but keep it straightforward. Illustrate what the weaknesses were - let’s say an SQL injection vulnerability on one of their websites - articulating the technical faults which led to it and how it can be remediated. We should show screenshots where applicable - clearing out any potentially exposed data - of the attack and any code or scripts which were used to get that result. Essentially this report mirrors the phases of the penetration test: first we did basic footprinting and established the purpose of the test, second came the methodology and the path to enumerating the target, and here we are deep in the trenches - but instead of finding exploits, we’re proving that they work. Our recommendations for a given issue have to be relevant to the company. Take our SQL injection example again: if the website in question sat within an air-gapped internal network with very strong network defences, we would scale down our severity level, as relatively speaking it isn’t a critical thing to fix. For each vulnerability, then, we provide a relative severity level and the best course of action to resolve it.
Next up is the metrics and measures section. At this point we have all our findings, but we have only given relative rankings, having spent most of our time looking at the vulnerabilities themselves. In this section we determine the severity of vulnerabilities in their own context, which can be calculated using the CVSS score. Just slapping the score next to each title doesn’t solve anything; we should explain why a vulnerability earns a critical rating or not, which is done by considering these factors:
- Vector of attack. Physical, web-based, network-based ?
- Ease of exploitation. Is the script freely accessible , do we have to chain exploits together to get to this stage , do I need to be on the computer to execute it ?
- The access level required, the amount of data that could be exposed, value of the asset etc. Which of the CIA triad does it impact the most ?
The measures and metrics section will hope to detail:
- The number of findings.
- The number of assets tested.
- The findings relative to strategic impact, as well as their objective score.
- Criticality of findings.
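When reporting objective scores, a CVSS v3.x base score maps onto a qualitative severity rating using the standard CVSS v3 bands, which a report generator might encode as:

```python
# Map a CVSS v3.x base score to its standard qualitative severity rating
# (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0).

def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
print(cvss_severity(5.3))  # Medium
```

As the text says, the band label alone isn't the analysis - the report should still explain the attack vector, ease of exploitation and business impact behind each score.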
Depending on the report we may see that the metrics and measures comes before the findings and remediation just as a summary , articulating the weight of the report itself before diving straight into the detail.
The conclusion. This is about thinking towards the future and providing recommendations for next steps - what the company can do in the immediate time after the test has finished to strengthen their position. For example, if the test was solely network-based, you may recommend the client review other parts of their infrastructure as well. The conclusion will reference the other points in the report - we may highlight critical metrics, effectively saying: “remember this, yeh, this has to be fixed as soon as possible!” The conclusion aims to be expert advice, and so we should make recommendations that take into consideration the risk rating and the organisation’s risk appetite. Whatever the appetite isn’t ready for is probably the thing we should put our energy into.
Secure handling and disposition of reports, storage time for report
Can you imagine if some nefarious someone managed to get their hands on our report after we did all the work breaking in? All the sensitive data they could amass would be terrifying… Things like the methodology - the incredibly thorough, technical steps to exploit a particular server - cannot get into the wrong hands, so the report must be transmitted and stored in encrypted form. Paper copies should be kept under lock and key. Any digital and paper copies of reports from years ago should be securely destroyed (outside of any legal requirement imposed on the company to retain records). While the client may choose to retain a copy of the report indefinitely, the penetration testers should only retain the report and related records for long enough to answer any client questions. When this period expires, the report should be securely deleted.
5.2 Explain post-report delivery practices.
So those would be the recommendations we put in a report, but what about the post-report activity? Well, all the changes we made on the client’s systems - such as removing logs - have to be reverted, so it’s like we never even touched the system.
During the engagement remember to document all changes you made , to which computer , when and where so we don’t run into trouble. The exam lists the main jobs of cleanup which include:
- Removing any shells - scripts which were used as reverse shells on the system.
- Removing tester accounts, backdoor accounts, malware and/or exploit code.
- Removing any tools we used on that machine - like `mimikatz`.
The exception to this rule is that testers may have made emergency changes to assist with the remediation of critical vulnerabilities (one of the communication triggers). If this occurred, testers should coordinate with management and determine appropriate actions.
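The documentation habit described above can feed the cleanup directly: record every change during the engagement, then derive the outstanding checklist from that log. The hosts and artefact names below are invented for illustration:

```python
# Sketch of a change log kept during the engagement, used to generate
# the cleanup checklist afterwards. Entries are hypothetical examples.
changes = [
    {"host": "10.0.0.7", "change": "created account 'svc_test'", "reverted": True},
    {"host": "10.0.0.7", "change": "uploaded reverse shell to /tmp", "reverted": False},
    {"host": "10.0.0.9", "change": "installed credential-dumping tool", "reverted": False},
]

# Anything not yet reverted still needs cleanup before client acceptance.
outstanding = [c for c in changes if not c["reverted"]]

for c in outstanding:
    print(f"CLEANUP NEEDED on {c['host']}: {c['change']}")

print(f"{len(outstanding)} change(s) still to revert")
```

A real log would also capture timestamps and the tester responsible, so that any emergency changes left in place (per the exception above) can be handed over to management with full context.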
Client Acceptance
Once the client has read our report, seen that we haven’t overstepped the scope, and is happy with the depth and work conducted, they will provide written acknowledgement of our final report. More typically this includes a face-to-face meeting where the testers discuss the results of the engagement with business and technical leaders and answer any questions that might arise. Client acceptance marks the end of the client engagement and is the formal agreement that the testers successfully completed the agreed-upon scope of work.
Lessons learned
Aside from the lessons that the client and the technical leaders learnt from the report and in the subsequent meetings, what lessons did the penetration testing team learn? We can always learn something from an engagement - some new tool found along the way that will make `PowerShell`-themed exploits easier next time round. This will make future engagements run smoother, and possibly quicker, allowing greater depth to be achieved. It’s often helpful to have a third party moderate the lessons learned session. This provides a neutral facilitator who can approach the results from a purely objective point of view, without any attachment to the work. The facilitator can also help draw out details that might be obvious to the team but would be helpful to an outside reader reviewing the results of the lessons learned session.
Follow-up actions / retesting
In some cases, the client may wish to have the team conduct follow-up actions after a penetration testing engagement. This may include conducting additional tests using different tools or on different resources than were included in the scope of the original test. Follow-on actions may also include retesting resources that had vulnerabilities during the original test to verify that remediation activities were effective. The nature of follow-on actions may vary, and testers should make a judgement call about the level of formality involved. If the client is requesting a quick retest that falls within the original scope of work and rules of engagement, the testers may choose to simply conduct the retest at no charge. If, however, the client is requesting significant work or changes to the scope or rules of engagement, the testers may ask the client to go through a new planning process.
Attestation of findings
If the tester is performing a compliance test, the client will want to prove to the regulatory authorities that a formal penetration test was conducted and critical infrastructure was checked - not by handing over the report itself. The level of detail in the attestation of findings provided to the client will vary: from a short letter confirming the client engaged the tester for a formal penetration test, to a listing of high-risk findings along with confirmation that the findings were successfully remediated after the test (this sounds pertinent to healthcare systems).