BruCON 2010 : Day 0x1
After hearing a lot of great things about the first edition of BruCON (in 2009), I decided to attend the con this year. The fact that BruCON is gaining popularity and has already earned a lot of recognition in the industry, combined with the fact that it takes place in Brussels, Belgium (my home country), made this decision a no-brainer. On top of that, the awesome line-up and attractive (read: low) pricing make everything complete… after all, pricing is based on a “pay what you can” model, making the conference accessible for everyone.
The BruCON website states :
“BruCON is an annual security and hacker conference providing two days of an interesting atmosphere for open discussions of critical infosec issues, privacy, information technology and its cultural/technical implications on society. Organized in Brussels, BruCON offers a high quality line up of speakers, security challenges and interesting workshops. BruCON is a conference by and for the security and hacker community.
The conference tries to create bridges between the various actors active in computer security world, included but not limited to hackers, security professionals, security communities, non-profit organizations, CERTs, students, law enforcement agencies, etc….. “
During the conference, BruCON also hosts another episode of “The Hex Factor”, a contest where both experienced professionals and less experienced people can test their skills. Winners are awarded some nice prizes (Amazon gift certificates, t-shirts, etc). Excellent initiative, although it made it hard for me to choose between attending the talks and playing the games…
Anyways, over the next 2 days, I will publish my reports, impressions and conclusions about the talks and workshops I attended at BruCON.
What is even more interesting is the fact that I will be able to meet some of my friends again, and make some new friends !
Fasten your seatbelts, here we go.
Keynote : Memoirs Of A Data Security Street Fighter
Mikko Hypponen (F-Secure) kicks off the conference by taking us on a journey through the history of the security industry. Back in the early days, all known PC viruses still fit on a floppy disk. Today, the number of viruses/trojans/worms and the impact these pieces of code have on the world has increased exponentially.
Brain (°1986), one of the first PC viruses/rootkits (‘stealth viruses’), spread via floppies (physical transport, spreading at the speed of travel). Nowadays, viruses use all current media (the internet, etc) and spread at the speed of light. Mikko explains that there are still viruses that use physical media to spread (USB sticks, for example).
Since then, many viruses have been discovered (Mikko was one of the pioneers who got into the business of reversing viruses and naming them). Up to that point, viruses were annoying, but not really destructive.
Then, the Michelangelo virus was discovered. This was the first virus that was written to really cause damage to computers. Almost at the same time, viruses were written to show graphical animations on the screen (V-Sign, Happy Birthday Joshi, Casino, etc).
Annoying, funny, destructive… name it the way you want. Either way, viruses were very very different at that time. They infected DOS and the early versions of Windows.
Then, the concepts changed. Instead of infecting an operating system, virus builders started targeting applications (Word, Excel… macro viruses). It even made Microsoft decide to break backwards compatibility for macros, to get rid of the issue.
Up to this point, there was still no money involved. Virus developers gathered, formed underground groups, writing viruses for fame, not for profit.
Happy99 caused the first email outbreak ever. While it was still harmless at the time, it was the start of a new era: email worms. Email-based worms continued to be the main vehicle for viruses until a few years ago, and they got worse over time (actually taking files, potentially containing confidential data, and sending them to other people in the address book). Because email filters matured in the meantime, we eventually got rid of email/attachment-based viruses.
Virus writers found a new target in social media. Social media = social networks = a lot of people connected to each other. People who are likely to trust each other and easy to convince to click a link in a wall post or personal message.
Virus writers stepped it up a little bit: viruses improved (Nimda, emails impersonating messages from Microsoft, etc) and started attacking and infecting machines/servers over the network (Code Red attacking vulnerable versions of IIS webservers, Slapper, Slammer, Blaster, Sasser…). These attacks did not require human interaction (except for cleaning up the mess afterwards) and resulted in a very fast spread across the world. While this does not sound really spectacular today, it was a revolution at the time. Personal firewalls were not very common, and these viruses were specifically written to scan huge IP ranges and quickly infect other machines. On top of that, some of the viruses actually prevented people from applying patches (because machines rebooted before the update could complete). Microsoft responded with XP SP2 (which had the firewall enabled by default, etc).
All very annoying indeed, but think about the fact that not just individuals and small companies were hit. Big companies (hospitals, flight operators, nuclear plants, railroad companies, etc) had to shut down operations, causing major issues for a lot of people.
Then Fizzer came out (7 years ago). This was the first virus that tried to make money. The only thing it really did was send out spam… nothing spectacular if you think about it today, but at that time it was a new threat, another conceptual change. Most viruses were written in Russia, South America and China.
Moving on, there were Sobig, the Witty worm (abusing a remote exploit in a firewall product), Netsky, SDBot and many others.
Cabir was the first virus infecting mobile phones (Symbian, 6 years ago), using Bluetooth as a transport mechanism. Some of these viruses were very clever: they were specifically designed to make sure the end user would accept the Bluetooth message as long as the device was within range of the attacker. Vendors had to change the way Bluetooth works (and how “yes” and “no” answers to Bluetooth messages are cached on mobile phones).
Mebroot was the first virus which used websites to infect visitors’ computers with malware/a rootkit. It used the official website of Monica Bellucci to spread, and surprised many anti-virus researchers because of the techniques used to spread, infect and operate. It even included a technique that would take crashdumps and send them back to the attackers, so they could improve their rootkit and prevent future crashes on other computers.
It became clear that viruses and rootkits had become big business. Criminals could now start using infected users for profit. Imagine a virus that encrypts or corrupts files on your computer and then asks you to buy (genuine or fake) software to fix them… The user thinks he has a harddrive crash, but in reality his files are being held hostage, with criminals asking for ransom.
Last year, the Conficker virus spread across the world. To this day, the Conficker botnet still has about 10 million computers under its control. The motives behind this virus are still not clear.
What is interesting is that Conficker spreads via USB sticks, using a clever technique to disguise valid commands in an – apparently corrupted – autorun.inf file. Windows just ignored the junk in the file and happily executed the valid commands.
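To picture the trick: the sketch below is a hypothetical autorun.inf in that spirit. The junk bytes and the payload path are invented for illustration, not taken from the actual worm; the point is only that Windows’ INI parser skipped lines it could not understand, yet still honoured the valid directives.

```ini
; hypothetical illustration - not the real Conficker file
ÿþ¶¶¶ random binary junk that makes the file look corrupted ¶¶¶
[autorun]
; valid directives below still get parsed and executed
open=RECYCLER\payload.exe
action=Open folder to view files
```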
In January 2010, Google announced that they had been infiltrated via the Aurora attack (an IE bug). (Espionage?) Targets change, and the virus transportation vehicles change with them. Another good example of this is PDF files: this very popular file format became another vehicle for transporting backdoors/trojans/rootkits and hacking into systems.
Stuxnet is probably the most important piece of malware discovered in the last 10 years. It uses multiple 0day vulnerabilities (some of them were only discovered after a few months). It was written to target Siemens WinCC SCADA systems. The idea behind it: if you control SCADA systems, you may be able to remotely control access gates, factory production lines, etc. Doing analysis after the facts, some people actually discovered that some websites already contained references to “using USB sticks as attack mechanism” long before the Stuxnet worm was discovered… So nobody really knows how long Stuxnet has been out there or who’s behind it.
The last 25 years have been tough and rewarding (to a certain extent) at the same time for antivirus researchers. While they have the opportunity to analyse new threats and new exploit techniques, and try to help the industry, the reality is that we are all still at least one step behind.
It is clear that a lot of money is now involved. Knowledge about new exploits, exploitation techniques, etc is worth a lot of money. Malicious coders can use that money to get access to more and better resources, invest more time, and get smarter. We are fighting an entire (yet malicious) industry.
Not sure who will win the battle.
You Spent All The Money And You Still Got Owned…
Joe McCray explains that back in the day, pentesting was the shizz. It was not hard to impress. People ran an nmap scan, scanned the planet, ran Nessus, downloaded exploits from rootshell or packetstorm, owned the network, wrote a report, and got paid. The only thing customers had to do was patch their systems, and life was good.
Today, everybody claims to be an expert. Everybody has gone through their first round of pentests, learned how to run nmap, Nessus, etc themselves…
Most companies used the pentest reports and have put big $$ systems in place (firewalls, IDS, IPS, etc). Local people got trained to run Nessus or ISS. People took exams and got a CISSP certificate… but the truth is that their networks are still broken. Despite all tools and products, a lot of systems, (web) applications are still vulnerable. Companies still get hacked. The reason behind this is somewhat obvious. Companies tend to focus on products and solutions, but don’t change their mindsets or put a risk management / security policies process in place. They still use the same tools to almost blindly audit their network and measure the success of the audit purely on the fact that the expensive tools detected the portscan and stopped the most obvious attacks.
But a lot of companies miss something really essential. Hackers focus on concepts, finding ways to bypass technology and so-called intelligence. They don’t focus on fancy tools to get the job done. It’s all between the ears.
This means that techniques to audit networks have evolved, and that the pentester should evolve as well… In fact, good pentesters should try to differentiate themselves from semi-automated, robot-like auditors by throwing some creativity into the jar. They might still use the same tools as before, but they fully understand how those tools work, how they need to be tweaked and modified to get proper results, and how to spit out the important details. It changes the way pentesters (should) work.
Security auditors now have to face load balancers (web, dns, etc). Furthermore, from Joe’s experience, it looks as if “every” company has deployed an IPS these days. So techniques to detect IPS systems are required, and they often require manual work (sending requests, looking at responses, finding out if the IP gets blocked, etc). Joe explains: “If a request gets blocked immediately, it most likely is an IPS. If it gets blocked after a while, it probably is an IDS. If it gets blocked after a day or so, it probably means that there’s an admin looking at log files.”
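Joe’s rule of thumb maps naturally onto a tiny classifier. The sketch below is just that heuristic in Python; the exact thresholds are my own assumptions, not values from the talk.

```python
# Toy classifier for Joe's timing heuristic. The thresholds (5 seconds,
# one hour) are illustrative assumptions, not measured values.
def classify_blocker(seconds_until_block):
    """Guess what blocked a probe from how quickly the block came."""
    if seconds_until_block < 5:        # blocked immediately -> inline device
        return "IPS"
    if seconds_until_block < 3600:     # blocked after a while -> alert-driven
        return "IDS"
    return "admin reading log files"   # blocked after a day or so

print(classify_blocker(1))       # IPS
print(classify_blocker(600))     # IDS
print(classify_blocker(86400))   # admin reading log files
```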
Most IPS systems don’t decrypt SSL traffic. So why not try piping scans through SSL tunnels ? Or run scans through Tor.
Web Application Firewalls are a real torture for pentesters. A lot of WAFs are sold to companies as a valid mitigation for web application vulnerabilities. Why fix the bugs? Put a WAF in place!
Luckily, WAFs are easy to detect. Send traffic that includes (‘ “ < ? # – | ^> ) and the WAF will probably tell you it’s a WAF. Alternatively, error codes, different encodings (UTF, but also try encoding using Unicode), etc might reveal the fact that there’s a WAF in place. A nice tool to help assess web applications (more specifically, to detect WAFs) is waffit/wafw00f. Also, specific WAFs may introduce vulnerabilities themselves… remember dotDefender from the OffSec “How strong is your fu” challenge?
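A minimal sketch of that detection idea: compare the response to a harmless request with the response to a request carrying typical trigger characters. The probe string and the error-page markers below are illustrative assumptions, not an exhaustive list.

```python
# WAF-detection heuristic sketch. PROBE is the kind of payload you would
# append to a query-string parameter; the marker list is illustrative.
PROBE = "' \" < ? # - | ^ >"

def waf_suspected(baseline_status, probe_status, probe_body=""):
    """A WAF often answers probes with a different status code
    (403, 406, 501...) or with a telltale error page."""
    if probe_status != baseline_status:
        return True
    markers = ("mod_security", "dotdefender", "not acceptable", "blocked")
    return any(m in probe_body.lower() for m in markers)

print(waf_suspected(200, 403))                  # True  - status changed
print(waf_suspected(200, 200, "regular page"))  # False - nothing suspicious
```

In practice you would send both requests yourself and feed the two responses to `waf_suspected()`.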
SQLNinja and sqlmap (both integrated with Metasploit) are also nice tools to automate certain attacks, but they might be a bit loud.
Filter evasion : in reality, there are still a lot of developers who try to fix server-side bugs using client-side filters (D’oh!). Even if some server-side filters are in place (such as filtering input and stripping out all non-alphanumeric characters), SQL injection would still be possible using “2 and 1 like 1”-style attacks.
Most IDSes use signature-based detection. A classic IDS uses, for example, “or 1=1” as a signature, but fails to pick up “or 2=2” or “2 or 2=2%2D%2D” (again, try different encodings… part in hex, parts in UTF, etc – it might just work to bypass the IDS/IPS). As soon as the pentester figures out which encoding works, he can modify tools such as sqlmap/SQLNinja to apply the encoding, effectively bypassing the filters.
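A toy example of why literal signatures fail, assuming a blacklist IDS that matches raw strings (the one-entry signature list is invented for illustration):

```python
# Naive signature matching: a literal "or 1=1" signature misses both
# trivial variants and percent-encoded payloads.
from urllib.parse import unquote

SIGNATURES = ["or 1=1"]            # the classic literal signature

def naive_ids(payload):
    return any(sig in payload.lower() for sig in SIGNATURES)

print(naive_ids("or 1=1"))                     # True  - exact match caught
print(naive_ids("or 2=2"))                     # False - same logic, missed
print(naive_ids("2%20or%202%3D2--"))           # False - encoded, missed
# Even decoding first does not help against changed constants:
print(naive_ids(unquote("2%20or%202%3D2--")))  # False - "2 or 2=2--"
```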
One of the best web application security tools out there to help you figure out how a specific filter is set up is PHPIDS. You can feed it your attack and it will try to determine whether it would get detected. You can tweak and modify the input until the tool determines that it’s most likely going to bypass the IPS/IDS.
To cut a long story short, signature based IDS is a joke. It *might* tell you if a box is already owned, but it’s very unlikely it will detect a hack attempt conducted by a seasoned hacker.
The next generation of attacks (ab)uses… the user. Hey, most companies still don’t filter outbound traffic, or force desktops to go through a local proxy but allow laptops to connect to the internet directly when they are roaming. On top of that, hackers know the users are the “weak” link in the chain. Tools such as Metasploit and SET will help pentesters set up very convincing (spear phishing) attacks, and help trick users into accessing an “evil” website that will, for example, give the pentester a meterpreter session on that client. From that point forward, the pentester can pivot his way into the network (or leave a custom trojan behind that gathers information when the computer is connected to the corporate network, and phones home when it’s connected to the internet again).
Inside the network, things usually get worse. Most people only protect the perimeter but don’t care about the internal network. And even if they have deployed internal LAN protection systems, those might be a joke. If NAC/security solutions are based on MAC addresses, then it might be as easy as looking at the back of a computer or printer to steal a valid MAC address and get access to the network. This is just one example of stealing a valid IP/MAC address.
Some switches allow autoprovisioning of VoIP phones. So if the pentester can make his Linux box impersonate a phone, he might be able to hop onto the voice VLAN, and perhaps get access to the server VLAN from there.
In some cases, companies may have deployed NIPS/HIPS systems on servers. They will detect portscans etc… In those cases, “net” commands may be stealthier and will still provide the pentester with relevant and important information. Think about it : hack one box (meterpreter), use priv + getsystem, disable AV/HIPS, rev2self, run “net” or “psexec” commands, use pass-the-hash to get to another system, and move on :). You can use the “incognito” meterpreter module to get domain admin privs.
From the defense point of view, Joe wrote a few (1 to 2 page) docs about each attack technique and how to properly prevent the issues. Shoot Joe an email (joe@learnsecurityonline.com) if you want to get a copy.
I was very amused by the way Joe performed this presentation. He really managed to grab the audience, has a great sense of humor, and passed on the message really well, in a very entertaining manner. Good job !!
GSM Security – Fact And Fiction
Fabian van den Broek, from the Radboud University in Nijmegen (Institute for Computing and Information Sciences), starts his presentation with some facts (from 2007) :
The GSM industry made $600 billion in 2007, with the largest part of the revenue generated via SMS messages. 90% of the world population has GSM coverage. There were about 4.1 billion GSM users in 2007 (a lot more than there are users on the internet). Despite the huge number of targets, and the fact that GSM is pretty old already, the number of attacks seen so far is relatively low.
GSM technology overview
Fabian explains some basic concepts about GSM and its network topology. GSM is a cellular system, he says. A device will connect to the base transceiver station (BTS) that has the strongest signal. All BTS systems are connected to base station controllers (BSC); one BSC can control multiple cell towers. The BSCs are connected to MSCs (mobile switching centers), which contain most of the intelligence in the network. All MSCs are interconnected again. One of the MSCs is a GMSC (Gateway MSC), which connects to land networks.
When a call is made, the call goes all the way up to the MSC (even if both the source and destination device are connected to the same base station). The MSC is, among other things, responsible for accounting/billing. It keeps track of all visitors, calls, etc in a database (the VLR). The GMSC stores information into its own VLR and stores some other info, unique to the GMSC, into the HLR. The HLR, for example, translates between a phone number and the IMSI, which identifies the SIM card. When a mobile phone enters the network, it screams out its IMSI and gets assigned a (temporary) TMSI by the VLR. The HLR can query the VLRs to find out where a specific phone is located.
The IMEI number identifies the phone (but it can be changed). Finally, there is a secret key (Ki), linked to the IMSI. The key is stored both in the SIM card and in the AuC (Authentication Centre) in the GMSC.
So far, so good.
GSM security
2 main concerns when looking at GSM :
- Authentication (a lot of providers only care about this, for economic reasons)
- Encryption (preserving confidentiality when you make a phone call)
Authentication : A3 and A8 (hash / session key derivation), implemented by the COMP128 algorithm (which was kept ‘secret’ by the providers, but got leaked/reversed a while ago and contains a lot of flaws). New versions of the algorithms were developed and implemented.
A phone knows its own IMSI, its Ki, and which A3 and A8 algorithms it uses. It communicates with the AuC using a challenge/response mechanism and ends up with a session key, which it can then use to encrypt the communication.
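The flow can be sketched as follows. COMP128 itself is not reproduced here; HMAC-SHA256 stands in for A3/A8 and the output sizes are simplified, but the shape of the exchange (RAND in, response and session key out, computed independently on the SIM and in the AuC) matches the description above.

```python
# Toy GSM-style challenge/response. HMAC-SHA256 is a stand-in for the
# real A3/A8 (COMP128) algorithms; sizes are simplified for illustration.
import hashlib, hmac, os

def a3_a8(ki, rand):
    """Stand-in for A3/A8: derive SRES (response) and Kc (session key)."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]   # 32-bit SRES, 64-bit Kc

ki = os.urandom(16)                   # secret key shared by SIM and AuC
rand = os.urandom(16)                 # challenge sent by the network

sres_sim, kc_sim = a3_a8(ki, rand)    # computed on the SIM
sres_auc, kc_auc = a3_a8(ki, rand)    # computed in the AuC

assert sres_sim == sres_auc           # network authenticates the phone
assert kc_sim == kc_auc               # both ends now share the session key
print("authenticated, session key:", kc_sim.hex())
```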
Encryption : A5/0, A5/1, A5/2, A5/3 : 4 cipher algorithms
- A5/0 : No encryption.
- A5/1 : Stream cipher. Still used in most western countries, but old.
- A5/2 : Stream cipher. Designed to be used in other countries; a lot weaker than A5/1.
- A5/3 : New cipher, published immediately. Block cipher.
As a user, you don’t really know what kind of network you are connected to.
Encryption only happens over the air, between the phone and the cell tower. The cell tower decrypts the traffic and then communicates with the BSC & MSC.
So, if you can act as a mitm between the phone and the cell tower, you may be able to capture the call.
Attacks
Attack 1 : Eavesdropping (listening in on communication) :
- Capture packets (“bursts” in GSM terminology). Surprisingly, this is the most complex part of the process. The decryption was broken a long time ago, but it took a while to build an affordable way to actually capture the packets. Keep in mind that the communication between a phone and a BTS uses frequency hopping: it does not use the same channel all the time, but hops across channels. This idea was introduced for QoS reasons (it’s better to have multiple average-quality channels than one bad channel), not for security. How the hopping takes place is agreed between phone and tower; if that exchange happens after encryption is set up, it becomes hard to capture the packets. The hopping algorithm itself is documented and known, and all cell towers need to use the same sequence for compatibility… so in theory it might be possible to detect the hopping, although nothing has been tried/published yet.
- Decrypt captured bursts : In 1994, A5/1 got reverse engineered, but at that time it was just an academic break. A few people decided to build a set of tables to facilitate a time-memory trade-off attack and make the decryption practical, but they had to give up after about 7 months. Now there is a tool called “Kraken” which makes decrypting the bursts a lot easier, using a set of tables (the Berlin set). In essence, Kraken guesses the contents of a burst, computes the keystream and looks up the corresponding session key in the tables.
- Interpret decrypted bursts : GSMDecode (AirProbe), WireShark, OpenBTS / OpenBSC
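The table-lookup idea behind Kraken can be illustrated with a deliberately tiny toy: SHA-256 stands in for A5/1, the key space is shrunk to 16 bits, and a full lookup table replaces the rainbow-chain trade-off that makes the real attack feasible.

```python
# Tiny time-memory trade-off toy: precompute keystream -> key once,
# then recover a "session key" with a single lookup.
import hashlib

def keystream(key):
    """Toy 'cipher': derive 6 bytes of keystream from a 16-bit key."""
    return hashlib.sha256(key.to_bytes(2, "big")).digest()[:6]

# Offline phase (done once): a table over the whole, deliberately tiny
# key space. The real attack uses rainbow chains to keep tables small.
table = {keystream(k): k for k in range(2 ** 16)}

# Online phase: guess a burst's plaintext, XOR it out to get the
# keystream, then one lookup recovers the key. Here we just pretend
# we already sniffed and recovered the keystream.
observed = keystream(0xBEEF)
recovered = table[observed]
print(hex(recovered))   # 0xbeef
```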
Attack 2 : MITM : cause a phone to connect to you, and tell the phone to use A5/2 (weak). If the mitm device is able to crack the key before a timeout occurs, then you (the mitm) can communicate with the GSM tower in an encrypted way. (This explains why you cannot use A5/0 between the phone and your mitm device: you would never obtain the session key you need to communicate with the GSM tower.)
In order to set up a MITM, you need OpenBTS/OpenBSC and OsmocomBB. The mitm would be possible, but it would most likely not go undetected: some phones will even warn the user about the lack of encryption. Last but not least, there’s a timing issue (you need to be able to crack the session key in time).
The solution for this is to link OpenBTS to Asterisk (good luck!). This way you can turn off encryption between the mitm device and the phone, and relay the call to the other party via Asterisk. You can only capture outgoing calls this way. This technique was demonstrated at BlackHat Vegas 2010.
Some other attacks :
- IMSI catchers
- Attacks on other parts of the network. There are rumours that traffic on other parts of the network is not encrypted
- Locations revealed (but not as accurate as GPS)
- DoS attacks (to other phones by sending malformed/special packets, or try to occupy enough bandwidth so nobody else can connect to a certain cell tower)
There’s hope
- GSM was 2G
- 3G uses mutual authentication -> try to use 3G phones if you can !
- 4G might use AES256 (rumours)
The Monkey Steals The Berries
Directly following the GSM security talk, Tyler Shields, Sr. Security Researcher at Veracode, continued to surf the “mobile” wave and started his session about malicious mobile applications (mobile spyware, etc).
He stresses that all of the concepts explained are OS independent and apply to Blackberry, iOS, android, etc etc.
While mobile devices have more and more features, contain data, and allow connections to a corporate network, there’s a lower barrier to entry for malicious people (compared to regular computers), whether the attack is widespread or very specific/targeted.
The attack
The easiest way to get spyware installed on a mobile device would be to try to slip in malicious code into an application and spread it using official channels. iPhone has the largest number of applications available in the AppStore, so that would be a high profile target.
Commercial tools such as FlexiSpy are robust and offer a wealth of options in terms of spyware… It’s not free and not open source, so it’s unlikely that malicious people will actually trust/use it. Of course, if you are after tracking your children, girlfriend or wife, this may be an option :). Another commercial platform is Mobile Spy. It supports a larger number of platforms, but again, it’s not free, so not very interesting for hackers.
Enter the “rogue” world. The UAE (United Arab Emirates) pushed a so-called “patch” to all their Blackberry users (which appeared to be spyware). It took a while for people to figure out what happened, but the code was eventually discovered and reversed. It’s just an example that shows how easy it really is to deploy software to mobile devices.
If all of this still sounds a bit unrealistic at this point, take a look at this list of events that happened over the last years (the most recent one just earlier this year) :
- Storm8 (iPhone) was available via the iTunes App Store, and the code was injected into iMobsters and Vampires Live (and others). Storm8 said the code was used in development only and should not have ended up in the final versions… Yeah, right.
- Symbian Sexy Space : poses as a legitimate server (ascserver.exe) and creates a botnet of mobile phones. Symbian actually signed the application as “safe code”. (The signing process only required a virus check and “not being selected for manual review”.) (July 2009)
- Symbian stamped another piece of spyware as safe (2010). This worm spreads as self-signed (untrusted) SIS installers. It originally spread as games, themes, etc.
- 09Droid wrote 50+ web banking front-end applications and made them available via Google’s Android Marketplace. His software could allow credentials to be stolen… (there are no signs that he had malicious intent… but think about it, and think about the huge number of people who actually downloaded and installed these tools).
- 3D Anti-Terrorist / PDA Poker Art / Codec Pack : games available on legitimate download sites. Some Russian hackers repackaged the games and included a dialer which would dial premium-rate numbers.
Looking at the bare facts, Tyler points out that only 23% of the smartphone owners actually use the security software installed on the devices. Only 13% of the organizations currently protect from mobile vulnerabilities and risks.
It is clear that, if you cannot technically enforce security policies, you are putting your company and/or private data at risk.
Security mechanisms :
Corporate level : implement mechanisms such as BES (Blackberry)… BUT… most of the settings in a default BES installation are “allow all”. Even in the most recent version of BES, a lot of the default options still allow apps to access email data, etc.
Mobile antivirus : implemented on the handheld, but it fails (because it only detects known viruses)
Application market place security screening (not very feasible though)
The process around code signing needs to be improved. A lot of code signing processes are still weak and would allow malicious people to write code and sign it so that it gets accepted as good code. RIM does not need a copy of the code: you only need to pay $20, get a key, and send a hash of the code for API tracking. Once RIM has granted you a key, they cannot revoke an already stamped binary; they can only prevent you from signing new code.
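The hash-only submission model can be sketched like this. HMAC stands in for whatever signing scheme the vendor actually uses, and the key handling is purely illustrative; the point is that only a digest ever reaches the signer, so the signer never inspects the code.

```python
# Hash-only code signing sketch: the vendor signs a digest it is sent,
# never the code itself. HMAC and the key are illustrative stand-ins.
import hashlib, hmac

VENDOR_KEY = b"illustrative-vendor-key"

def sign_binary(binary):
    code_hash = hashlib.sha1(binary).digest()  # only this leaves the developer
    return hmac.new(VENDOR_KEY, code_hash, hashlib.sha256).hexdigest()

def verify_binary(binary, signature):
    return hmac.compare_digest(sign_binary(binary), signature)

app = b"\x90\x90could-be-anything"     # the signer cannot tell good from bad
sig = sign_binary(app)
print(verify_binary(app, sig))         # True: the stamp checks out
print(verify_binary(app + b"!", sig))  # False: any change breaks the stamp
```

Note the asymmetry the talk points out: once `sig` exists it keeps verifying forever; revocation can only stop future signings.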
How to detect malicious code ?
Signature-based detection would be too reactive and is thus broken. Maybe resource usage whitelisting might work, but it’s very complex (too complex for regular users). Sandbox-based execution heuristics might work… but they are reactive again (because you need to run the code in the sandbox; the application might break because of this, so you may not be able to fully detect malicious code).
Reverse engineering the code works… but is very difficult. (List data sources, list ways how data can be extracted/transmitted, and build data flows).
There is no real solution… We are trusting vendor application stores and there are minimal methods of finding out if you got infected or not.
Finally, Tyler introduces his own piece of spyware, written specifically for Blackberry, and explains that there are many ways to get it installed. You can convince a user to install it using Desktop Manager, compromise a BES server and push it to all users, or get it added to the Marketplace and remove it again after a few days. The tool would allow malicious people to control its functionality using SMS (a Google Voice account is enough :) ).
Good talk, should make you rethink whether you want to allow your employees to use any phone to connect to your company network/store company information or not, and should trigger you to review your BES setup.
In addition to that, implement best-of-breed antivirus, make sure you enforce settings and application policies, and – if possible – make sure outgoing traffic is forced to use company monitoring/filtering devices.
Cyber [Crime|War] – Connecting The Dots
Right after lunch, Iftach Ian Amit initiated the afternoon sessions with his Cyber[Crime|War] talk.
Before starting the actual talk, Iftach mentioned that he won’t be demonstrating/explaining anything that he didn’t see or try for himself.
He starts by explaining the difference between war and crime. It’s all about context…
- War : government operated, official backing, official resources, financing
- Crime : private
When it comes down to CyberWar, in a lot of countries, CyberWar is often being covered up (or just denied) afterwards… “There is no CyberWar, except for… other countries.“
At the same time, the main countries that can be associated with or connected to CyberWar are the USA, Russia, China, Israel and Iran. In the USA, there is thoroughly documented activity surrounding CyberSecurity, and a lot of official teams and organizations have a direct relation with CyberStuff.
Russia has similar organizations (GRU, SVR, FSB, the “Center for Research of Military Strength of Foreign Countries” and so on). China has the PLA (People’s Liberation Army), with departments that work on both the defensive and offensive part. Iran has the Telecommunications Infrastructure Co. (government controlled), reporting to the Iranian military. Israel has the IDF (Israel Defense Forces), C4I. Staffing in Israel is mostly homegrown (in-house training). And what about Mossad? (check the jobs section on mossad.gov.il)
What is CyberWar? Iftach states that it’s a highly selective targeting of military/critical resources, in conjunction with a kinetic attack. In most cases, it is synchronized with actions on the ground. Alternatively, it may include performing massive DDoS to disrupt operations in a certain region. If a country has to counter a DDoS, they often tend to lower security measures in order to cope with it.
Either way, the result is that a lot of citizens get hit by it as well.
CyberCrime is mostly associated with money. That means that, if you can track the money / money objectives, you find the criminals. Attacks use the regular channels (web, mail, …), but use custom tools/sploits
The main target locations of CyberCrime are the US and Europe. (That’s where the money is, right?) The ZeuS botnet is a good example of how a point-and-click tool can produce nasty malware and create a huge botnet in a very effective way.
From a defense point of view, there is a lot of technology available… anti-virus/malware/spyware/rootkit/trojan tools, firewalls, IPS, … but it doesn’t really work when you are fighting serious competition. We are at least one step behind all the time (a false sense of security). That means the criminals are free to play.
Anyways, in this talk, Iftach tried to connect the dots and explain that there might be a thin line between CyberCrime and CyberWar.
Iftach mentions incidents in Estonia, Israel and Georgia. By looking at some facts around the incidents, it becomes clear that CyberCrime organizations are widespread and well organized. It all seems related to CyberCrime and Hacktivism… but if you look closer at the facts (Iftach used the Georgian incident), you quickly start to see relations :
- DoS against the president’s website / government websites using C&C/botnets
- Troops enter Georgia
- Additional C&C servers come online and continue to attack gov websites, AND commercial websites in order to extort them.
- Coincidence ?
What initially looked like an “ordinary” CyberCrime attack, might have been part of a bigger plan.
Iftach continues.
On December 18, 2009, a Twitter DNS attack was attributed to Iranian activity. Twitter got defaced by the “Iranian Cyber Army”, but until December 2009 there was no group with that name. However, the defacement page looks very similar to the ones posted on the Ashiyane forums. On those forums, in the wargame section, Iftach found a link to a wargame that targeted the website of a gas organization in the US (critical infrastructure ?). Coincidentally, on that very same day, Iranians seized an Iraqi oil well on the border.
China : on January 12th, Google announced that their infrastructure had been hacked from China and intellectual property stolen, using a “sophisticated coordinated attack”. Adobe told basically the same story, on the same day. Everything looks as if this is a classic CyberCrime attack : a big company gets attacked, information gets stolen… and sold ?
Hmmm – maybe CyberCrime and CyberWar might be closer than we think, and might require a new approach.
What’s next ? What does Iftach expect for the (near) future ?
Think about this : A lot of computers are currently being shipped to Africa. Unprotected computers. Botnet.
Iftach also mentions “cloud” and the fact that it has everything (connectivity, etc) to use as part of CyberCrime operations.
CyberCrime is big business, but it can be (and, in some specific cases, is) used as a disguise that serves a higher goal.
The message to take away from this presentation is : CyberCrime and CyberWar often tie together. What often looks like an isolated, targeted attack may be part of a bigger story. Getting infected by malware may not always cause big issues for you as a person, but it might help criminals perform DDoS attacks on critical infrastructure… or help criminals / hacktivists draw attention to something else while something bigger is going on.
Either way, while nations try to cope with the threats by providing training on cybersecurity, the commercial development of malware still reigns. At the same time, there’s a lack of legislation in a lot of countries, which makes those countries de-facto havens for a lot of hackers.
This was an entertaining presentation, but I was sometimes having a hard time understanding where it was heading towards. I guess lunch killed me :)
You can find the slides here
Embedded System Hacking And My Plot To Take Over The World
Paul Asadoorian (from PaulDotCom, and Product Evangelist at Tenable (Nessus)) starts his session by explaining that he has a so-called “special affinity” with embedded systems. He still feels that most of these systems are more vulnerable than other systems, and often overlooked at the same time.
Maybe this can allow him to take over the world.
From a general point of view, you need Money, Power, and need to be stealthy when executing the plan to take over the world.
So, how can he use embedded systems to meet those goals and gain world domination ?
Video games, entertainment systems, wireless routers, printers and faxes?
It’s clear that you can make money off video games, entertainment systems (in a legal way). But if you are after making a lot of money, fast, then you need a more aggressive approach.
You would need to be able to manipulate the traffic/information that travels through these embedded systems. Information = power… and money.
A lot of embedded systems are used to control water, electricity and so on. So if you can control those, you haz power.
The “nice” thing, Paul continues, about embedded systems is that nobody really cares about them, unless they’re broken. A lot of devices have no mouse, keyboard, or logging for that matter. On top of that, some vendors (driven by cost & economics) leave out security features to make devices cheaper and faster.
Think “routers”.
Finding the right targets is not hard. Look at wigle.net, find open access points, look at the vendors and ssids… and you’ll know what brand to focus on. Paul quotes wired.com when mentioning that more than 21,000 routers were found with their management website accessible from the internet, configured with a default username/password. Low hanging fruit, sitting ducks, easy targets, quick wins. Name it the way you want, but Paul has a point there.
Luckily, as someone in the audience mentions, newer router models block access to the administrative website from the untrusted interface. Paul replied that you can still find a lot of older systems connected to the web. If someone can connect to one of those, change its configuration, or even upload custom firmware, then he would be one step closer to world domination.
Technically, printers/scanners/multifunction devices could be used for espionage. What if you can connect to a corporate printer and list the documents that were printed… or even pull documents off the device & save them to your local computer ? Information = power.
Your list of options is almost infinite… so if you are serious about taking over the world, you know where to start.
The take-aways from this session are :
- Perimeter control is important. Any time you connect a device to the net, make sure you know/understand what that means. Does it allow for remote management ? Turn it off ! If you cannot disable a certain potentially dangerous protocol, then either cross fingers, disconnect the device or buy a different device.
- Change default passwords, even if remote management is not allowed !
- Only use secure management protocols (yes, even if you are only managing the device from the inside)
- Be aware of the embedded devices you use at home / at work and make sure they are not in a default configuration. If the device works by default, just by plugging it into the network… then beware…
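To put those take-aways into practice, here is a minimal Python sketch (not from the talk; the host address and port list are illustrative assumptions) that checks which common management ports a device on your own network answers on. Anything it finds open is a candidate for “turn it off or switch to a secure protocol” :

```python
# Quick self-audit : which common management ports does a device accept
# TCP connections on ? Only run this against devices you own or are
# authorized to test.
import socket

# Illustrative list of typical embedded-device management ports.
MGMT_PORTS = {23: "telnet", 80: "http", 443: "https", 8080: "http-alt"}

def open_mgmt_ports(host, ports=MGMT_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeded
            if s.connect_ex((host, port)) == 0:
                found[port] = name
    return found

if __name__ == "__main__":
    # Hypothetical home-router address; replace with your own device.
    print(open_mgmt_ports("192.168.1.1"))
```

If port 23 (telnet) or plain port 80 shows up, that device is being managed over an insecure protocol and deserves a closer look.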
Finalizing his session, Paul mentioned www.securityfail.com, a brand new wiki where people can share their “security horror / fail stories”, which should help force vendors to make those take-aways become reality. Not sure if vendors will actually care… we’ll see.
That’s it for today. I’ll grab some dinner and then head on to the lightning talks. In those talks, people are given a very limited/short timeslot to talk about a specific topic. It might/will be hard to blog about those talks, but if something really stands out, I will certainly update this blog.
If you want to get more info about some of the other talks as well, you definitely should check out http://blog.c22.cc/
Tune in again tomorrow, for day 2 of BruCON 2010.
© 2010 – 2021, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved.