
Hashtopolis Infrastructure Guide

passwords hashcat hashcracking

Originally published on 1-23-2022; republished on 12-26-2022.


Distributed Password Cracking with Hashtopolis #

In this guide, I will walk through setting up infrastructure for hash cracking: a Hashtopolis server and its agents.

Usually, when using hash-cracking applications like Hashcat or John the Ripper, the work runs locally on the host machine. This architecture makes it hard to bring a lot of hardware to bear on the same job. With tasks sometimes taking days to weeks, the natural answer was an overpowered machine with several GPUs to cut down on processing time.

An alternative solution is a tool called Hashtopolis, which uses a client-server architecture to distribute the keyspace of a task across multiple machines as smaller jobs. A central server then aggregates these jobs and collects the results. This multi-platform tool (a PHP server with Python agents) is a great way to connect the power of multiple machines to work on a single task.

The tool has two primary components: the client and the server. The server communicates with client machines over HTTP(S), passing files, binaries, and task commands. The client acts on those commands, executes the hash-cracking application, and reports founds back to the server, which aggregates the clients' data into its MySQL database.


Setting Up the Server #

This guide assumes you have a machine with a dedicated GPU you are configuring for use.

The first step is setting up the operating system. Two great options are Ubuntu 18.04 LTS and Ubuntu 20.04 LTS; these are great because the CUDA drivers tend to work with few complications. Almost any Debian-based build should work, however.

The server will act as the orchestrator and distribute tasks, files, and hash lists to the agent machines, so it will need to be configured with a LAMP (Linux, Apache, MySQL, PHP) stack to fulfill its role.

It is also worth mentioning it is possible to use the same machine for both a server and an agent. Just be mindful of resource consumption, as the server will run many core services for the application. If you implement this, consider symbolically linking the server’s /var/www/hashtopolis/files directory with the agent’s files directory to remove the need to sync files to both directories.
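For a combined server/agent machine, that symlink might look like the following. The agent install path here is an assumption; adjust it to wherever your agent actually lives:

```shell
# link the agent's files directory to the server's files directory
# /usr/local/hashtopolis/agent-python is a hypothetical agent path -- adjust as needed
sudo rm -rf /usr/local/hashtopolis/agent-python/files
sudo ln -s /var/www/hashtopolis/files /usr/local/hashtopolis/agent-python/files
```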

After installing the operating system, setting up SSH, and logging into the machine, it's time to install the base packages:

# installing core packages
sudo apt update && sudo apt upgrade -y
sudo apt install apache2 -y
sudo apt install libapache2-mod-php php-mysql php php-gd php-pear php-curl -y
sudo apt install git -y
sudo apt install phpmyadmin -y

# depending on your OS you may find the package listed as either one
sudo apt install mysql-server -y
sudo apt install mariadb-server -y

# secure the default mysql install
sudo mysql_secure_installation

# clone down the repo
git clone https://github.com/s3inlc/hashtopolis.git
sudo mkdir /var/www/hashtopolis
sudo cp -r hashtopolis/src/* /var/www/hashtopolis
sudo chown -R www-data:www-data /var/www/hashtopolis

# create mysql database and user, and set a password (replace PASSWORD)
# note: GRANT ... IDENTIFIED BY was removed in MySQL 8, so create the user separately
sudo mysql -uroot -e "CREATE DATABASE hashtopolis;"
sudo mysql -uroot -e "CREATE USER 'hashtopolis'@'localhost' IDENTIFIED BY 'PASSWORD';"
sudo mysql -uroot -e "GRANT ALL ON hashtopolis.* TO 'hashtopolis'@'localhost';"
sudo mysql -uroot -e "FLUSH PRIVILEGES;"

Now a few manual steps to configure the correct domain with the server.

# create vhost file for the desired domain
# swap DOMAIN.TLD for your domain
sudo vi /etc/apache2/sites-available/DOMAIN.TLD.conf

# put the following content into the file
<VirtualHost *:80>
 ServerName DOMAIN.TLD
 DocumentRoot /var/www/hashtopolis
 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
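After saving the vhost file, enable the site and reload Apache using the standard Debian tooling:

```shell
# enable the new site, disable the stock default vhost, and reload Apache
sudo a2ensite DOMAIN.TLD.conf
sudo a2dissite 000-default.conf
sudo systemctl reload apache2
```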

If you deploy it within a private network, you can add an entry to each agent's hosts file pointing the DNS name at the server's local address. If you maintain an internal DNS server, you can create the record there instead.
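For the hosts-file approach, the entry is a single line on each agent. The address below is a placeholder for your server's local IP:

```shell
# append to /etc/hosts on each agent -- 192.168.1.50 is a placeholder address
192.168.1.50    DOMAIN.TLD
```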

The next step is configuring Apache to handle the workload, as the agents will communicate back to the server through HTTP(S).

# edit the PHP configuration file and note that the path depends on the installed php version
sudo vi /etc/php/7.2/apache2/php.ini

# search and change the following
memory_limit -> 512M
upload_max_filesize -> 500M
post_max_size -> 500M

# if you are running an internal server and want to squeeze out more performance
max_execution_time -> 65000
memory_limit -> 22G
upload_max_filesize -> 21G
post_max_size -> 21G

# edit the Apache configuration file
sudo vi /etc/apache2/apache2.conf

# search and change the following
KeepAliveTimeout -> 10
MaxKeepAliveRequests -> 1000
AllowOverride -> All

# reload Apache
sudo systemctl reload apache2
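If you prefer to script the php.ini edits rather than making them by hand, something like this should work; the 7.2 in the path is version-dependent, and the values assume the default `key = value` formatting of the stock php.ini:

```shell
# adjust the version in the path to match your installed PHP
PHPINI=/etc/php/7.2/apache2/php.ini
sudo sed -i 's/^memory_limit = .*/memory_limit = 512M/' "$PHPINI"
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 500M/' "$PHPINI"
sudo sed -i 's/^post_max_size = .*/post_max_size = 500M/' "$PHPINI"
```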

Now that Apache is running, we can finish setup in a web browser. Navigate to your server, and it will present the installation GUI, which asks you to authenticate with the MySQL account we set up earlier to link the services together.

  • Server hostname: localhost
  • Server port: 3306
  • MySQL User: hashtopolis
  • MySQL Password: PASSWORD
  • Database name: hashtopolis

After authenticating, you will be asked to create a new administrator account and presented with a login page. The final step in the initial configuration is to delete the install directory, which is very important for the overall security of the server. After deleting the install directory, ensure that the services start when booted with systemctl.

# remove the install directory
sudo rm -r /var/www/hashtopolis/install

# setting services to start on boot
sudo systemctl enable apache2
sudo systemctl enable mariadb

Setting Up the Agents #

The agent code is multi-platform and needs Python installed to operate. The agent will download any files required to run jobs, but items like GPU drivers must be installed beforehand.

In this example, we assume you are installing an agent on the same machine as the server and using NVIDIA drivers. The following instructions show getting set up on a Debian-based operating system. Windows users can download drivers directly from NVIDIA.

# ensure the machine is up to date and install python packages
sudo apt update
sudo apt full-upgrade -y
pip3 install requests
pip3 install psutil

# install nvidia and cuda drivers
sudo apt install -y nvidia-driver nvidia-cuda-toolkit

# verify install (should both show nvidia drivers)
nvidia-smi
lspci | grep -i vga

# verify with hashcat
hashcat -I

# fetch the agent file from the server
curl "http://DOMAIN.TLD/agents.php?download=1" -o agent.zip
python3 agent.zip

A new agent requires a one-time password generated in the New Agent tab, reducing the risk of leaking hashes or files to rogue agents.

To get a code, in the web interface go to Agents → New Agents → Create New Voucher. With this code, you can finish setting up the agent.

The URL to the API of the install is:

  • https://DOMAIN.TLD/api/server.php

The client should be fully operational and can be started with python3 agent.zip. The agent will remain active until the process is killed, so we want to set up a method to maintain persistent agents.


Creating Tasks, Hashlists, and Super Sets #

With everything configured, there are a few terms to know to make the most of the server. Additionally, see the task creation guidelines in the official documentation.

  • Hashlists: a list of imported hashes with an associated hash type.
  • Tasks: essentially a hashcat command that the server divides among the agents.
  • Preconfigured Tasks: stored, templated tasks that can be applied to any hash list.
  • Super Tasks: a list of preconfigured tasks that can be applied all at once to any hash list.
  • Super Hashlists: a list of hash lists of the same type that can all be cracked at once.

Some other things to note:

  • The hash list (and its hash type) is represented by the #HL# placeholder, which is substituted into the hashcat command.
  • .hcchr charset files are supported, while .hcmask files are not.
  • PRINCE (Probability Infinite Chained Elements) pre-processor is supported.
  • Sort hashes before you upload them.
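For the last point, a quick dedupe-and-sort before upload takes one coreutils command (the file names here are examples):

```shell
# sort the hashlist and strip exact duplicates before uploading
sort -u raw_hashes.txt > hashes_sorted.txt
wc -l raw_hashes.txt hashes_sorted.txt
```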

The following is an example list of the different types of tasks used in Hashtopolis:

# hybrid left incremental tasks
-a6 #HL# wordlist.lst ?a -O
-a6 #HL# wordlist.lst ?a?a -O

# hybrid right incremental tasks
-a7 #HL# ?a wordlist.lst -O
-a7 #HL# ?a?a wordlist.lst -O

# hybrid attacks with specific charsets
-a6 #HL# hibp.lst ?d?d?d?s -O

# rules based attacks
-a0 #HL# wordlist.lst -r rules.rule -O --loopback
-a0 #HL# wordlist.lst -r rules.rule -r rules.rule -O --loopback

# raking 500k rules (generates random rules)
#HL# wordlist.lst -g 500000 -O --loopback --generate-rules-func-min=2 --generate-rules-func-max=6

# prince task template (select pre-processor wordlist and move --pw-min directive to the pre-processor task)
-a 0 -m 0 #HL# -r prince.rule --pw-min=8

# incremental bruteforce tasks
-a3 #HL# ?a
-a3 #HL# ?a?a
-a3 #HL# ?a?a?a

Detailed Configuration #


Configuring Persistent Agents #

A benefit of persistent agents is that when the machine reboots, the agent automatically reconnects to the server and continues cracking. Within the web GUI, you can toggle whether an agent should be used on jobs, controlling when and which machines are cracking.


Windows Agents #

For Windows agents, you will want to create a .bat script and use Task Scheduler to start the script on boot. After setting it up, you can view the agent’s status from Task Scheduler and restart it when needed. The following is a copy of a .bat script that you can use to start the agent:

cd "C:\Users\User1\Documents\hashtopolis"
"C:\Users\User1\AppData\Local\Programs\Python\Python39\python.exe" "C:\Users\User1\Documents\hashtopolis\agent.zip"

*Nix Agents #

For *Nix agents, you will want to create a service unit file such as hashtopolis.service and store it in the /etc/systemd/system/ directory. Then set the service to start on boot with sudo systemctl enable hashtopolis.service.

[Unit]
Description=Service for Hashtopolis agent
After=network-online.target
Wants=network-online.target

[Service]
User=kraken
ExecStart=/usr/local/hashtopolis/agent-python/start.sh

[Install]
WantedBy=multi-user.target

The above unit file runs the agent as the "kraken" user via systemd's User= directive (rather than sudo, which is an antipattern inside unit files) and starts a Bash script that executes the agent code. Below is a copy of the start.sh script referenced:

#!/bin/bash

# code to execute hashtopolis agent as the user "kraken"

cd /usr/local/hashtopolis/agent-python
if [[ "$(whoami)" == "kraken" ]]; then
  /usr/bin/python3 agent.zip
else
  echo "must run as the kraken user"
  exit 1
fi

Extra Parameters for Agents #

When an agent receives a task, it executes a hashcat binary to start cracking and reports the cracks back to the server. Hashcat is versatile and supports many customization options, but when configuring agents, you should not enter the following parameters into tasks: --gpu-temp-disable, --gpu-temp-retain, and --opencl-devices. If an agent needs these options, add them to the Extra Parameters field in the Agents tab. You can enter workload profiles here too, as they are unique to the machine receiving the task. Note that file names are case-sensitive between Windows and *Nix agents.


Server Variables and Speed Benchmarks #

The server offers several options to customize its configuration. Many of these settings work fine at their defaults, but tuning them can yield more performance depending on your use case.

  • Time in seconds a client should be working on a single chunk
    • When an agent starts a task, there is a lot of overhead in fetching resources and starting Hashcat. Increasing this value suits machines with a lot of GPU power that can process chunks quickly.
  • Use speed benchmark by default
    • Hashtopolis supports two benchmarks: speed and runtime. Both produce similar results, and the speed benchmark completes much faster; however, with salted hashes and some other cases, it can misestimate speed and produce chunks that are too large or too small.
  • Check all hashes of a hash list on import in case they are already cracked
    • This setting searches uploaded hashes against the database as they are processed, potentially giving you pre-cracked hashes before you even start cracking. This is great for non-salted hashes but can significantly slow down large uploads.
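As a rough sanity check when tuning the chunk-time setting, each chunk covers approximately benchmark speed × chunk time candidates. The numbers below are made-up examples, not measurements:

```shell
# back-of-the-envelope chunk sizing -- example numbers only
speed=10000000000      # assumed effective speed of 10 GH/s for the hash mode
chunk_time=600         # "time a client should work on a single chunk" in seconds
echo "$(( speed * chunk_time )) candidates per chunk"
# prints: 6000000000000 candidates per chunk
```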

Finally, note that the speeds from Hashcat's benchmarking feature (-b) will not be the speeds you see on the server. The rates from the benchmarking feature are a best-case scenario, and actual cracking speeds will vary with the attack type, number of hashes, number of rules, and other factors.


Configuring MySQL Server Backups #

One of the best things you can do is configure a script to back up the database in the event of corruption. Below is a copy of a script you can run manually or configure as a cron job.

#!/bin/bash

date=$(date +"%m-%d-%Y")
mysqldump --databases --single-transaction --quick hashtopolis > /mnt/backup/mariadb/$date-hashtopolis.sql
gzip /mnt/backup/mariadb/$date-hashtopolis.sql
  • The --single-transaction parameter will start the transaction before running the backup and create a copy of the current state. Without this parameter, mysqldump will cause the database to lock, preventing IO until the backup completes.
  • If the database is corrupted, you must enter recovery mode and create a backup. Insert innodb_force_recovery = 2 into /etc/mysql/my.cnf then restart the service.
  • To restore the database from a backup, you can use sudo mysql < bkup-hashtopolis.sql.
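To schedule the backup script as a nightly cron job, an entry like this works; the script path and schedule are examples:

```shell
# run the backup script at 02:30 every night -- add via `sudo crontab -e`
30 2 * * * /usr/local/bin/hashtopolis-backup.sh
```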

Exporting Cracked Hashes from Hashtopolis #

Cracked passwords make an excellent data source, and dumping them directly from MySQL can be more efficient than going through the web client. The cracked hashes live in the Hash table of the hashtopolis database, so we can use a SQL query to dump the contents to an output file.

MariaDB [hashtopolis]> show columns from Hash;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| hashId      | int(11)      | NO   | PRI | NULL    | auto_increment |
| hashlistId  | int(11)      | NO   | MUL | NULL    |                |
| hash        | text         | NO   | MUL | NULL    |                |
| salt        | varchar(256) | YES  |     | NULL    |                |
| plaintext   | varchar(256) | YES  |     | NULL    |                |
| timeCracked | bigint(20)   | YES  |     | NULL    |                |
| chunkId     | int(11)      | YES  | MUL | NULL    |                |
| isCracked   | tinyint(4)   | NO   | MUL | NULL    |                |
| crackPos    | bigint(20)   | NO   |     | NULL    |                |
+-------------+--------------+------+-----+---------+----------------+
$ sudo mysql
use hashtopolis;
select hash,plaintext from Hash where isCracked = 1 into outfile '/tmp/passwords.tmp' columns terminated by ':';
$ cat passwords.tmp
492B273341237DA1DC32912508117609:table86487
492B3F8B9CC3B13777D81EE3AA62FB53:pike77
492B2EA8477CD7E5D45F4D4B2307CB0E:bleakley1
492B31D55A4CC2659E15368F0E289E8C:cabada1
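Note that INTO OUTFILE can fail if MySQL's secure_file_priv setting restricts where the server may write. A sketch of an alternative is to run the query in batch mode from the shell (the output file name is an example):

```shell
# -N drops column headers, -B gives tab-separated batch output;
# sed turns the tab into the usual hash:plaintext separator
sudo mysql -N -B hashtopolis \
  -e "select hash, plaintext from Hash where isCracked = 1;" \
  | sed 's/\t/:/' > passwords.txt
```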

That concludes this guide. We installed and configured a Hashtopolis server and agents, created tasks and hash lists, and covered overall usage.