Thursday, July 27, 2017

BHUSA17: Behind the Plexiglass Curtain: Stats and the Stories from the BlackHat NOC

Bart Stump, Neil Wyler

Both of the presenters have been on the review boards for DefCon and BlackHat and have been working on the NOC (Network Operations Center) for many years. Additionally there are 21 industry professionals in the NOC.

They used Palo Alto Networks as their core firewall vendor, with 2 Gbps of bandwidth, fewer wired rooms, and fewer APs. Everybody gets segmented to protect you as much as possible. You're paying a lot to be here, so availability is important, too. They're working with RSA and Gigamon this year for analytics.

Most of the gear is in the basement to keep it from being so hot and noisy in the NOC.

The NOC is now on display and has a wifi "cactus". You can look at them, but not come into the actual NOC.

Working with Century Link, PAN, RSA, Gigamon, Ruckus and Pwnie Express.

They hit the limit of their network capacity for the first time this year. The link was saturated for the first few hours on Monday morning, when people were downloading Windows updates.

Last year, some changes were made to the network outside of their control and it caused a 4 hour outage. This year they had a much better lock down.

Found rogue access points - one in a plant!

They don't block anything or any DNS requests, because many demos or training sessions need to access malware sites.

Over 300,000 DNS queries were observed to domains known to be malicious or to host malware. Over 12,000 queries went to dynamically generated domains, and over 7,800 newly seen domains were queried from here.

The top two sites visited were for Windows updates - as were the 5th, 7th, 8th and 10th. Apple and Ubuntu update servers were hit hard, too. They advise users to patch before coming to this conference - many of these hits could be from expo systems, training machines, VMs, etc.

About 50% of the traffic was encrypted, down from last year. They did see one "VPN" connection that was done in the clear - oops! So, check your VPN is actually encrypting!

Found a new version of Emotet. Saw 404 errors from a site that kept returning different data sizes. This version was released on Tuesday and discovered at BlackHat on Wednesday.

There were 500K unique wireless MACs and 65K unique Bluetooth MACs (80% Apple). Many devices were moving between the trusted BlackHat WiFi network and the open WiFi. Next year, people won't be allowed to take laptops in and out of the NOC and will use preimaged machines instead.

They discovered 94 ad-hoc WiFi networks, 55 APs on non-US WiFi channels, and 17 Pineapple APs.

They see lots of old, unpatched OSes, out-of-date iOS versions, and out-of-date apps. They also see things like webcams where the authentication was encrypted but the video stream of the home security camera was not...

Going to use the BlackHat wifi? Recommend using VPN (NOTE: I do that at every conference or public wifi).


BHUSA17: Evolutionary Kernel Fuzzing

Richard Johnson

Johnson has been working in fuzzing for a while and releasing new tools over the last few years. His new tool will allow people to fuzz the Windows kernel without modifying the binaries.

Kernels are a critical attack surface, and modern mitigations usually rely on isolation and sandboxing. There are weaponized exploits against the kernel, but little progress in vuln-dev research.

Evolutionary fuzzing is not a new concept; it was first introduced at BlackHat 2006 (Sparks & Cunningham, Sidewinder), and there have been lots of other papers, presentations, and open source projects since.

Evolutionary fuzzing needs a fast tracing engine, fast logging, and a fast evolutionary algorithm. It's highly desirable for it to be easy to use and portable. His new tool is useful out of the box! The first tools originally only targeted source code. American Fuzzy Lop (AFL) features a variety of mutation strategies, block coverage via compile-time instrumentation, and a simplified approach to the genetic algorithm.

AFL has a UI and tracks edge transitions. Lots of demos followed.
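The slides didn't include code, but the edge-coverage idea AFL popularized is simple enough to sketch. Below is a minimal Python model of it (AFL's real implementation is in C, via compile-time instrumentation); the sizes and names here are illustrative, not AFL's actual internals.

```python
# Minimal Python model of AFL-style edge coverage; sizes/names are illustrative.
MAP_SIZE = 65536  # AFL uses a 64 KB coverage bitmap by default

class EdgeCoverage:
    def __init__(self):
        self.bitmap = bytearray(MAP_SIZE)
        self.prev_location = 0

    def visit_block(self, cur_location):
        # Each basic block gets a (compile-time) random ID; the edge
        # prev -> cur is hashed into a bucket and its hit count bumped.
        idx = (cur_location ^ self.prev_location) % MAP_SIZE
        self.bitmap[idx] = min(self.bitmap[idx] + 1, 255)
        # The shift keeps the A->B and B->A edges in different buckets.
        self.prev_location = (cur_location >> 1) % MAP_SIZE

    def interesting(self, global_seen):
        # An input is kept for further mutation if it hits any bucket
        # the fuzzer has never seen hit before (global_seen: per-bucket flags).
        return any(hits and not global_seen[i] for i, hits in enumerate(self.bitmap))
```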




BHUSA2017: Free-Fall: Tesla Hacking 2016: Hacking Tesla from Wireless to CAN Bus

This is their first time going through all of the remote attacks on the Tesla in public. In September 2016 they successfully implemented a remote attack against the Tesla without physical access to the car.

There was an OLD WebKit used in QtCarBrowser on the Tesla. Tesla cars automatically scan for and connect to known SSIDs, and there is a "Tesla Guest" SSID with password abcd123456 used at body shops and Superchargers. QtCarBrowser will automatically reload its current webpage and trigger their WebKit exploit. In cellular mode, they can target phishing and mistyped URLs.

Tesla has since patched all of the vulnerabilities found by KeenLab. They had attacked a vuln in JSArray::sort(compareFunction), which they could use to leak some addresses. They took advantage of type confusion and an overlap in array storage: they shifted the array one time in the compareFunction, copied back into JSC::JSArray::sort(), and then unshifted TWICE to trigger increaseVectorPrefixLength() and fastFree an arbitrary address (payload of JSValue-A).

They took advantage of the powerful CVE-2011-3928 for the leak, with a corrupted HTMLInputElement structure. For arbitrary address read/write: leak the JSCell address of a Uint32Array, get the storage address of the Uint32Array from the JSCell, fastFree that address, and define a new Uint32Array to achieve AAR/AAW.

Finally, they got a shell from the browser, but it was a low-privilege account (browser uid=2222), so they needed to elevate their privileges. The kernel was very old, so there were many well-known kernel attacks - they were able to fully dump the kernel with a corrupted syscall entry in the syscall table.

A brief introduction to the gateway (gw/gtw): it's a PowerPC chip running an RTOS (most likely FreeRTOS) with an SD card. They were able to get their own code into the firmware. This can be done during an ECU upgrade (the details are fuzzy here, as the slides moved by really fast). Best approach: trigger the ECU upgrade by giving a command to the gateway (0x08 - update trigger), get your own boot image loaded, and get it to run with taskUpdate.

They then sent messages to other ECUs, including on the CAN bus, using diagnostics (though the ability to do this is limited when the car is in drive mode). But they can work around this by swapping the handlers. Still, some ECUs will not respond at all in drive mode, and some ECUs will notice the speed and disable dangerous functions if necessary. They then focused on the forwarding table to block the forwarding process. They could use UDS to unlock the ECU, because the seed/key for security access to the ECU is fixed (see the sketch below).
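The talk didn't show the actual seed/key algorithm, so the following is only a minimal Python sketch of why a fixed seed/key in the UDS SecurityAccess handshake (service 0x27) is weak; the values and function names are hypothetical, not the ones KeenLab recovered.

```python
# Hypothetical sketch of a UDS SecurityAccess (0x27) handshake with a fixed
# seed/key, the weakness described above. The seed/key values are made up.
FIXED_SEED = bytes.fromhex("11223344")   # ECU hands out the same seed every time
FIXED_KEY  = bytes.fromhex("aabbccdd")   # so the matching key never changes

def ecu_request_seed():
    # In a sound design this would be a fresh random seed per session.
    return FIXED_SEED

def ecu_check_key(key):
    # With a fixed seed, one captured (or offline-computed) key works forever.
    return key == FIXED_KEY

seed = ecu_request_seed()
if ecu_check_key(FIXED_KEY):
    print("ECU unlocked: diagnostic writes (e.g. forwarding-table changes) allowed")
```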

They were able to get access via 3G/Wi-Fi, exploit the WebKit browser, root the in-vehicle systems, patch and disable AppArmor, bypass the ECU firmware integrity verification, reprogram ECUs, and do dangerous things while the car was driving. Tesla was very responsive and appreciative about the disclosure and fixed the issues in 10 days. Browser security enhancements and firmware improvements also came out at the same time. There are now more restrictive AppArmor rules (yes, Tesla uses AppArmor instead of SELinux), along with restrictions on where you can run executables and where you can write files.

They improved kernel security by applying all patches and setting CONFIG_CPU_SW_DOMAIN_PAN=y, so normal kernel accesses are unable to access userspace addresses.

Additionally, Tesla added code signing everywhere to protect the ECUs, but the researchers found some issues because the ECU is not able to verify the signature itself. They reported those issues to Tesla at the end of June 2017, and all cars had the fixes by July 2017.

Cool video demonstrating the results. Apparently they had to use all of their hacks to do a fun music show with headlights going on and off (single lamp at a time) and doors opening and shutting.

UPDATE: now with the fun video!

Wednesday, July 26, 2017

BHUSA17: Fighting the Previous War

This talk is brought to us by the folks at Thinkst. Back in 2009, they worked on various attacks in the cloud. Attacks like taking extra resources from Amazon. Amazon limited each account to 20 machines, but then they'd get their 20 machines to each get 20 machines... and so on. Cloud is different and needs different thinking.

People still think of SaaS as "another webapp" or "just a Linux host", but it is very different and should be treated as such.

Footprinting is under-valued. While pentesting a network, they'd spend the bulk of the time finding all the machines - often long forgotten boxes.

Hardly anybody is setting up sendmail anymore, but their apps are still sending emails. They need microservices for this, but where do the responsibilities live? Who is processing your email, and who is making sure the services you integrate with are behaving properly (or are used correctly)? (Reference: White Hats - Nepal: Bug Bounty from Uber - worth a read.)

Canarytokens, a framework released in 2015, makes honeytokens easy to get. It can help you learn about attacks on your systems.

You have to keep in mind that attacks don't look like they used to - devices are getting harder, but boundaries are fuzzier.

Looking at the Atom editor, there are lots of plugins available - it's easy to put in a malicious plugin. You can then easily steal keystrokes or get a user to unknowingly run commands for you. The plugins are not screened and can be very complicated.

This is no longer just hosted virtual machines. Look at how WordPress hosting on AWS works - the user only touches one virtual machine, but behind it is a very big graph of complicated, interconnected services.

To attack AWS, a good bit of recon is finding account IDs. An account ID is a 12-digit number that is considered private (but is not a secret). You can make one call with a valid account ID and one with an invalid ID and get different responses. It works, but it is very sloooooow.

But you can find these more easily by looking at github or on help forums. Even though Amazon says to not post them, people do. Easy!

Another recon area is S3 bucket discovery, which has similar discovery vectors (see the sketch below). Areas of potential compromise are around the APIs; for example, it is possible to enumerate permissions with a variety of calls without actually knowing what the permissions should be.
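The talk didn't show tooling for this, but the bucket-discovery idea is easy to sketch: unauthenticated requests to candidate bucket names come back with different HTTP status codes depending on whether the bucket exists and is readable. A minimal Python sketch, with made-up bucket names:

```python
# Minimal sketch of S3 bucket-name recon: the status code of an unauthenticated
# request differs for nonexistent, private, and public buckets. Names are made up.
import requests

def bucket_status(name):
    r = requests.head(f"https://{name}.s3.amazonaws.com", timeout=5)
    if r.status_code == 404:
        return "does not exist"
    if r.status_code == 403:
        return "exists (access denied)"
    if r.status_code == 200:
        return "exists and is readable"
    return f"unexpected response: {r.status_code}"

for candidate in ("acme-corp-backups", "acme-corp-logs", "acme-corp-dev"):
    print(candidate, "->", bucket_status(candidate))
```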

Sadly running out of battery.... posting now!






BHUSA2017: Automated Testing of Crypto Software Using Differential Fuzzing

JP Aumasson and Yolan Romailler

JP just released a new book on cryptography - check it out. Yolan is working on his master's.

What do we want to accomplish? We want to prove that valid functionality works, that the program cannot be abused, and that secrets won't leak. They are testing code against code: for example, if you're porting from one language to another, both implementations should be able to do the same things (the assumption is that the reference code is correct - not always true!). Additionally, they want to test the code against the specifications, though those are sometimes incomplete or leave exercises to the reader.

Automated testing can cover static analyzers, test vectors, dumb fuzzing, smart fuzzing, and formal verification. With things like test vectors, the more vectors you have, the better your coverage. That's a lot of testing, so you need to think about how to maximize efficiency (ease of use x coverage).

There are limitations to the current methods, like randomness quality, timing leaks, and the fact that test vectors focus on valid inputs.

The researchers came up with a new tool, CDF, to do crypto differential fuzzing. It's a command-line tool written in Go, portable to Windows/Linux/macOS, and made fast so it won't be a bottleneck. The tool checks for both correctness and security of implementations, as well as interoperability between implementations. It will check for insecure parameters, non-compliance with standards (e.g. FIPS), and edge cases for specific algorithms.
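CDF itself is written in Go; the core differential idea is simple enough to sketch in a few lines of Python (this is a toy illustration, not CDF's actual interface): run the same inputs, including edge cases, through two implementations of the same primitive and flag any disagreement.

```python
# Toy sketch of differential fuzzing (not CDF's interface): two implementations
# of the same primitive get identical inputs; any mismatch in output or error
# behavior is reported as a finding.
import hashlib, os, random

def differential_fuzz(impl_a, impl_b, rounds=10000):
    edge_cases = [b"", b"\x00" * 64, b"\xff" * 4096]
    for _ in range(rounds):
        msg = random.choice(edge_cases + [os.urandom(random.randrange(256))])
        results = []
        for impl in (impl_a, impl_b):
            try:
                results.append(("ok", impl(msg)))
            except Exception as e:      # hangs/timeouts would also count in practice
                results.append(("error", type(e).__name__))
        if results[0] != results[1]:
            return msg, results          # at least one implementation is wrong
    return None

# Usage with two SHA-256 implementations (trivially identical here):
print(differential_fuzz(lambda m: hashlib.sha256(m).digest(),
                        lambda m: hashlib.sha256(m).digest()))
```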

This is similar to WycheProof, but different. The two tools will complement each other.

One of their checks for ECDSA makes sure that sending an all-zero hash or an all-zero secret key will not send the implementation into an infinite loop. For RSA they do various checks for timing leaks. With their testing, they found a potential DoS in OpenSSL.

They discovered that most libraries are not validating the DSA parameters, and they found issues with almost every library tested. Even though they did not find issues with DSA in Crypto++, that doesn't mean they don't exist - the tests just did not trip over them.

Several libraries went into an infinite loop with the DSA generator set to 0. That's not allowed by the standard, but it tripped up several libraries.

Still working on adding additional tests and making the suite more robust.








BHUSA17: How We Created the First SHA-1 Collision and What It Means for Hash Security

Elie Bursztein

There are two other types of attacks besides collision attacks: pre-image and second pre-image attacks (neither is feasible against SHA-1 at this time).

For this attack, they could not use brute force, as you would need millions of years of brute-forcing with a GPU - we don't have that much time. Better to use cryptanalysis!

Before you can attack, you must understand how the hash function is constructed. SHA-1 uses a Merkle-Damgård construction: take the first block of the file, combine it with an IV, and run the compression function; then go block by block until you reach the end, and you are left with a short digest that depends on the whole file.
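A toy Python sketch of that chaining structure (the compression function and IV below are placeholders, not SHA-1's, and the padding is simplified):

```python
# Toy Merkle-Damgard construction: NOT SHA-1, just the chaining structure.
BLOCK_SIZE = 64            # SHA-1 works on 512-bit (64-byte) blocks
IV = 0x0123456789ABCDEF    # placeholder initial chaining value

def toy_compress(state, block):
    # Stand-in for SHA-1's 80-round compression function.
    return (state * 31 + int.from_bytes(block, "big")) % (1 << 64)

def toy_md_hash(msg):
    # Simplified padding (real SHA-1 also appends the message length).
    msg += b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK_SIZE)
    state = IV
    for i in range(0, len(msg), BLOCK_SIZE):
        state = toy_compress(state, msg[i:i + BLOCK_SIZE])
    return state           # the final chaining value is the digest

print(hex(toy_md_hash(b"hello world")))
```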

We got a cool picture of what SHA-1 compression looks like "unrolled". To do cryptanalysis, you need to look at the message's differential path and the equation system (the first 16 steps have been solved, which makes it predictable up to step 24).

It turns out it is easier to do the collision with two blocks instead of one: find two blocks that have almost the same hash for a near collision, then resolve the difference to get a full collision.

So - how do you exploit this? Look at a fixed-prefix attack for SHA-1: find a prefix before you find a collision. If you carefully choose the prefix, you can improve the attack.

Let's take a look at real-world attacks exploiting MD5. In 2009 there was an SSL certificate forgery, exploited by leveraging a wildcard (instead of a domain name) and adding the old public key and signature as a "comment" in the new certificate.

In 2012 there was massive malware called "Flame" that was used to spy on Iranian computers with fake Windows Update certificates. The collision in practice used 4 blocks instead of 2, which shows that the attackers had their own cryptographers.

As of today, MD5 is broken - you can create collisions on your cell phone. For SHA-1 you still need a lot of time with current computing power.

So - how do you create a new collision? Choose carefully what prefix you want, as you cannot change it after the fact. Through 2015 and 2016 they worked on near-collision blocks, using about 3,000 CPUs for about 8 months to calculate a near collision. Throughout 2016 they worked on the full collision attack, and in 2017 the attack was completed.

You have to find a tradeoff between failure and efficiency and continue to scale the computation. This team did it in 1 hour batches.

In 2016 they found their first collision - they had to spend a few days analyzing it, and they found a problem: it wasn't usable for finding a full collision.

The team had to make efficient use of GPUs: work step by step to generate enough solutions for the next step, always try to work at the highest step, and backtrack when the pool is empty. The work was also parallelized, with one thread per solution.

The new attack is called Shattered - it takes 110 GPUs running for 1 year, which is far less than the 12 million years needed for brute force. We saw a demo of two different PDF files that hash to the same SHA-1 value (but still have different SHA-2 hashes).
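You can reproduce that part of the demo yourself with the two PDFs the researchers published at shattered.io; assuming you've downloaded them as shattered-1.pdf and shattered-2.pdf into the current directory, a few lines of Python show the SHA-1 digests match while the SHA-256 digests differ:

```python
# Compare SHA-1 vs SHA-256 for the two Shattered PDFs (downloaded from shattered.io).
import hashlib

def digests(path):
    data = open(path, "rb").read()
    return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

sha1_a, sha256_a = digests("shattered-1.pdf")
sha1_b, sha256_b = digests("shattered-2.pdf")
print("SHA-1 collision:   ", sha1_a == sha1_b)     # True
print("SHA-256 collision: ", sha256_a == sha256_b)  # False
```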

This is already being exploited, for example on WebKit.  A developer submitted a test to prove WebKit was resistant but tripped an unforeseen bug in SVN and took the site down.

The collision is in the wild - what do we do about legacy software? SHA-1 is deeply integrated into Git - how can users protect themselves? They can test files for collision attacks, with negligible false positives.

The takeaway: SHA1 is dead. Do not do new deployments with it and try to move away from it. We need to continue doing counter-cryptanalysis and keep in mind hash diversity.

BHUSA17: When IoT Attacks: Understanding the Safety Risks Associated with Connected Devices

Billy Rios, Jonathan Butts

There will be 26-30 billion connected devices by 2020! Need to worry about confidentiality, integrity and availability - but is that enough? Is there something more important than keeping your credentials safe? Yes - safety! Many of these devices are controlling environmental factors in laboratories, chemical mixtures, etc.

Rios and Butts looked at devices connected to the internet, in a public space accessible to the general public, and exploitation of the device could result in a safety issue.

Currently, there are only a few devices that meet all 3 criteria. One surprising device they found - car washes! They looked at the LaserWash. Car washes are really just industrial control systems (ICS) - and they come with all the attitude and controls those systems come with. The car wash is different from most ICSs, though - it's accessible to the general public, with no screening at all.

The researchers wrote an exploit that can cause a car wash system to physically attack an occupant. Currently there is no patch for the vulnerability - if you own one of these car washes, please contact the manufacturer.

The big takeaway here - you should wear a hard hat to go into a car wash.

When Charlie Miller and Chris Valasek attacked a car, they had to buy the car and about $15,000 in tools to analyze it. The systems are so specialized that you must buy specialized tools. In their referenced work, the tools they bought were only good for Fiat-Chrysler vehicles.

Their cost considerations: they acquired firmware in 2014 through a compensated operator, and did not find a willing owner until 2017 (whom they also had to compensate, paying for the car washes). Buying a car wash is a large sum ($250K?), so they really had to find people who already owned one and had an academic interest. Shouldn't there be a better way? Should manufacturers give researchers access to systems? Without this, researchers are looking at live, deployed systems and spending their own money.

They initially disclosed the bug to the vendor in February 2015 and reached out repeatedly through April 2017... still no response. In May they had fully working remote exploit code (a PoC) - still no response. Once the talk was posted to the BlackHat schedule, the vendor asked if they had tested against a demo system.

From other vendors, they got a lot of comments like "that's not how we designed the system to work", etc., so writing up a vulnerability is not sufficient. The researchers had to build the PoC and prove their exploit worked - very costly and time consuming. Could vendors do better?

You need to remember these devices are just computers - the car wash has storage, cables, disks, and programs. Older car washes had a manual, physically connected interface with a joystick to manually control the arm, etc. Now they have close-proximity remotes: if you are within line of sight of the car wash, you can control it.

When these car washes are deployed, they likely come with warnings for the new owner that the car wash is connected to the Internet. But, why?  You can configure the car wash to send emails. Maybe the owner wants to see how many car washes happened in any given day, which packages are the most popular and what times are the busiest - business reasons.

But - this car wash is on Facebook, YouTube and LinkedIn. Now, that is perplexing.

At the end of the day, this is a computer running Windows CE with an Intrinsyc Rainbow web server and a Binary Gateway Interface. Windows CE is end-of-life - there is no more support for any vulnerabilities. The web server calls map to unmanaged ARM DLLs, and there are a lot of DLLs on the system that could be abused.

From the web browser, you can point to various DLLs and access them directly via rbhttp22.dll.

Now... there are credentials. The owner credentials are ...12345. This gives you all access, including free car washes. The engineer creds (PDQENG) are 83340.  But, the researchers don't think having the default creds is a true exploit.

There is a PLC driving the functionality of the car wash. This is a system of systems, lots of communication happening. There are 3 key DLLs for exploits.

They won't be publishing the details of the exploit, because these vulnerabilities are just not fixed.

One of the basic issues is that authentication is handled very simply. The authentication level is set to OWNER before the credentials are checked... just cause an exception in the authentication routine, and you will remain OWNER! (See the sketch below.)
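The talk didn't show the vendor's code; the following is only an illustrative Python sketch of the flawed pattern described (promote first, demote only on a clean failure), with made-up names and structure.

```python
# Illustrative sketch of the "promote before verify" flaw; names and structure
# are made up, not the car wash's actual code (which lives in unmanaged ARM DLLs).
OWNER, GUEST = "OWNER", "GUEST"

def authenticate(session, supplied_password, stored_password):
    session["level"] = OWNER                    # bug: privilege granted up front
    try:
        if supplied_password.strip() != stored_password:
            session["level"] = GUEST            # demotion only on a clean mismatch
    except Exception:
        pass                                    # any exception skips the demotion
    return session["level"]

session = {}
# Faulting the check (here, passing None so .strip() raises) leaves us as OWNER:
print(authenticate(session, None, "12345"))     # -> "OWNER"
```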

It doesn't matter if the owner changes the password, you can read it back.

The researchers identified where the hardware safety mechanisms were - those are difficult to override; it's much easier to attack the software mechanisms. An example of a hardware mechanism is a welded-on safety stopper - much more difficult to defeat.

Software controls the doors that go up and down, after interacting with sensors that report "all clear". You can exploit the door logic, for example, to disregard the response from the sensors. A video was shown of a car wash door crushing the hood of a car that was only partway into the car wash.

There is another issue with CVSS scoring. There is a medical infusion device with a vulnerability that can kill the user - it's rated at 7.1.  A bug in a medical cabinet that allows people to steal drugs - 9.7. Why should that be "more severe" than death?  The speakers have additionally been working on a scoring system specifically for medical devices.

You cannot rely on software solely for physical security, and you should never respond to a vulnerability researcher with "the system wasn't designed to do that" :-)

Update: here's the referenced video:






BHUSA2017: BlackHat USA posts will be out of order

Due to connectivity issues, some posts are being written offline. Hope to get them up by the end of the week, but will live blog what I can.