
Brewing Hash Cracking Resources with The Twin Cats

·5 mins
passwords hashcat hashcracking

Last Updated: 6-9-2023.

Introduction #

A lot of hash cracking is rooted in data. This data can take many forms: lists of probable candidates, names, and other likely clues. These resources can significantly impact your success at hash cracking. One method I have found successful is extracting and transforming data from the found material into new attacks.

From maskcat (GitHub), we have a few great ways to create targeted wordlists that match popular data points, and now with rulecat (GitHub) we have a perfect pair for creating targeted rules.

Entropy and Chaos #

In this example, we will look at a collection of a few breaches involving a fast algorithm and a very well-picked-over left list. In my initial approach, I got 449 new founds, several on lists that had seen no cracks in 100+ days and one that had gone ~600 days without any new submissions. I had already put some time into these lists, so several attack methods were exhausted.

Hash List ID    Found Hashes    New Plains
858             +154            +154
7385            +212            +206
1619            +38             +186

Thankfully, I had maskcat and rulecat to assist in creating new, target-specific material to work with. The first thing I did was use the founds to create a mutated wordlist via token swapping.

$ for i in {1..1000}; do cat founds.tmp | shuf | maskcat mutate 13 >> mutate.lst; done;
$ for i in {1..1000}; do cat founds.tmp | shuf | maskcat mutate 12 >> mutate.lst; done;
$ for i in {1..1000}; do cat founds.tmp | shuf | maskcat mutate 11 >> mutate.lst; done;
...
$ for i in {1..1000}; do cat founds.tmp | shuf | maskcat mutate 4 >> mutate.lst; done;
$ usort mutate.lst
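
To give a sense of what the token swapping above produces, here is a toy pure-shell approximation (this is NOT maskcat's actual algorithm, just an illustration): split each found plain into a base word and a trailing digit token, then recombine every base with every digit token from the other plains.

```shell
# toy illustration of token swapping (NOT maskcat's algorithm):
# split each plain into base + trailing digit token, then cross-join
printf 'summer2019\nwinter1234\n' > founds_demo.tmp
bases=$(sed 's/[0-9]*$//' founds_demo.tmp)
digits=$(grep -oE '[0-9]+$' founds_demo.tmp)
for b in $bases; do
  for d in $digits; do echo "$b$d"; done
done | sort -u > mutate_demo.lst
cat mutate_demo.lst
# summer1234
# summer2019
# winter1234
# winter2019
```

The real tool operates on whole tokens of mixed character classes and a target length, but the effect is the same: candidates that recombine fragments already known to appear in this breach.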

Next, I created new rules with rulecat using tokens from the founds. This way I was reintroducing material into the process, and the workload could be applied more efficiently through rules.

$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat append >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat append remove >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat append shift >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat prepend >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat prepend remove >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat prepend shift >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat overwrite 0 >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat overwrite 1 >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat overwrite 2 >> target.rule
$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat tokens 99 | rulecat overwrite 3 >> target.rule
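
For readers without rulecat, the simplest of these transforms is easy to approximate in awk. Assuming `rulecat append` emits one hashcat `$X` (append character) function per character of each token (a sketch of the idea, not rulecat's full feature set):

```shell
# rough stand-in for "rulecat append": each character of a token
# becomes a hashcat $X (append character) rule function
printf '2019!\nabc\n' | awk '{
  rule = ""
  for (i = 1; i <= length($0); i++) rule = rule "$" substr($0, i, 1)
  print rule
}' > append_demo.rule
cat append_demo.rule
# $2$0$1$9$!
# $a$b$c
```

The prepend variant is the same idea with the `^X` function, applied to the token's characters in reverse order.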

I also wanted to create a wordlist of special characters from the masks, as that appeared to be a consistent pattern. To better isolate the frequencies, we can convert founds into masks with maskcat, then use mode (GitHub) to get the count of each item:

$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat | mode -c | head
     67 ?l?l?l?l?l?l?l?d?d?d?d?s:12:3:255
     63 ?l?l?l?l?l?l?d?d?d?d?s?s:12:3:262
     58 ?l?l?l?l?l?l?l?l?d?d?s:11:3:261
     58 ?l?l?l?l?l?l?l?l?d?d?d?d:12:2:248
     58 ?l?l?l?l?l?d?d?d?d?s?s:11:3:236
     38 ?l?l?l?l?l?l?d?d?d?d?s:11:3:229
     34 ?l?l?l?l?l?l?l?d?d?d?d:11:2:222
     30 ?l?l?l?l?l?l?l?l?d?d?d?d?s:13:3:281
     30 ?l?l?l?l?l?d?d?d?d?d?d?s:12:3:223
     26 ?l?l?l?l?l?l?l?l?l?d?s:11:3:277
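
If mode is not installed, its counting behavior here can be approximated with standard tools: sort the lines, count duplicates, and order by frequency.

```shell
# portable stand-in for "mode -c": per-line frequency, most common first
printf '!\n@\n!\n!\n@\n#\n' | sort | uniq -c | sort -rn > counts_demo.tmp
cat counts_demo.tmp
#   3 !
#   2 @
#   1 #
```

mode is still the better choice on large files, but the fallback is handy for quick checks.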

We can use maskcat in remove mode to strip everything but the special characters, then count them:

$ cat algo.potfile | awk -F ':' '{print $NF}' | maskcat remove uld > special.tmp
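
The `remove uld` mode (strip uppercase, lowercase, and digits) can also be approximated with tr for a quick sanity check, dropping lines left empty:

```shell
# quick approximation of "maskcat remove uld": strip upper/lower/digits,
# keeping only the special characters of each plain
printf 'summer2019!\npass@@word\nplainword\n' | tr -d 'A-Za-z0-9' \
  | awk 'length > 0' > special_demo.tmp
cat special_demo.tmp
# !
# @@
```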

$ cat special.tmp | mode -c | head
    573 !
    430
    239 @
    170 ^^
    153 !!
     78 @@
     69 *
     63 !@
     49 **
     28 #

$ cat special.tmp | mode | head -n 25 > special.tk

# making rules
$ cat special.tk | rulecat append >> special.rule
$ cat special.tk | rulecat append remove >> special.rule
$ cat special.tk | rulecat prepend >> special.rule
$ cat special.tk | rulecat prepend remove >> special.rule

# knowing that most lengths are ~11-14 and some are at the front
$ cat special.tk | rulecat overwrite 0 >> special.rule
$ cat special.tk | rulecat overwrite 10 >> special.rule
$ cat special.tk | rulecat overwrite 11 >> special.rule
$ cat special.tk | rulecat overwrite 12 >> special.rule

Now we are armed with a targeted wordlist and targeted rules that we can use with our existing materials for new attacks. The best part is that after using them, we can create more, use rli.bin to remove duplicates, and continue attacking with brand-new material.
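
The rli.bin dedupe step from hashcat-utils looks like the following, where previous-rounds.lst is a placeholder for whatever material has already been run:

# drop candidates already tried in earlier rounds before the next attack
$ rli.bin mutate.lst mutate.clean.lst previous-rounds.lst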

# some examples of different attacks used
$ hashcat -m ALGO -a0 mutate.lst -r target.rule --loopback --bitmap-max=27

$ hashcat -m ALGO -a0 mutate.lst -r target.rule -r general.rule --loopback --bitmap-max=27

$ hashcat -m ALGO -a0 mutate.lst -r target.rule -r special.rule --loopback --bitmap-max=27

After several rounds of repeating this process, making new mutations and rules each time, we ended up with 3548 new founds in a similar time frame, with some very interesting patterns:

Recovered........: 3548/488118 (0.73%) Digests (total), 20/488118 (0.00%) Digests (new)
Remaining........: 484570 (99.27%) Digests
qlqlskfk1928
qkrth2dyrj
kjh1477jh!
eodudA6901!
jjinmom5004!
20dbflghgh
gksktkfkd0623^^
qkdqudemr7!
wjddldjaak!@
skaghrla713!
baesomang2@
zmflasksk02
Ekfrldyrjxm1!
ghdwn96hana
tjdudgkdl^^
tjdwlswlsl1127
2alal2Qmsl
qkqngmltjs88!
dlwldmsdlaehk
alsnltm0824!
eotjdwldms7!
dlwls1014702!
gkskchlrh0919!
jiwonzxc66!
090109qlql@
kyu093rbwo
dudyhwh510!
alsxmdndb00!
2524252kkjj@
sksmsgPfla12@
$ mode -c masks.tmp | head -n 15
    110 ?l?l?l?l?l?l?d?d?d?d?s?s:12:3:262
     91 ?l?l?l?l?l?l?l?d?d?d?d?s:12:3:255
     90 ?l?l?l?l?l?d?d?d?d?s?s:11:3:236
     72 ?l?l?l?l?l?l?l?l?d?d?s:11:3:261
     59 ?l?l?l?l?l?l?l?l?d?d?d?d:12:2:248
     46 ?l?l?l?l?l?l?l?l?d?d?d?d?s:13:3:281
     42 ?l?l?l?l?l?l?l?d?d?d?d?s?s:13:3:288
     42 ?l?l?l?l?l?d?d?d?d?d?d?s:12:3:223
     41 ?l?l?l?l?l?l?l?l?d?d?s?s:12:3:294
     41 ?l?l?l?l?l?l?d?d?d?d?s:11:3:229
     38 ?l?l?l?l?l?l?l?l?l?d?s:11:3:277
     34 ?l?l?l?l?l?l?l?d?d?d?d:11:2:222
     33 ?l?l?l?l?d?d?d?d?s?s?s?s:12:3:276
     33 ?l?l?l?d?d?d?d?d?d?d?d?s:12:3:191
     31 ?l?l?l?l?l?l?l?d?d?s?s:11:3:268
Hash List ID    Found Hashes    New Plains
858             +54             +54
7385            +2821           +2732
1619            +681            +679

Application #

Hopefully, this has inspired some methods for creating better hash-cracking resources and shown that following data trends is a defining characteristic of a sound hash-cracking methodology.

Reference #

The following are aliases referenced above:

# unique sort file

usort() {
	if [[ $# -ne 1 ]]; then
		echo 'unique sort file inplace'
		echo 'EXAMPLE: usort <FILE>'
	else
		# C locale for fast bytewise sort; -T keeps temp files on this volume
		LC_ALL=C sort -u "$1" -T ./ -o "$1"
	fi
}