I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) followed by a final space-separated column with a number. small.txt has 150K rows (~3 MB), each with a space-separated word and a number.
Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-).
The desired output would contain all columns from the huge.txt file, plus the second column (the number) from small.txt, wherever the first word of huge.txt matches the first word of small.txt.
My attempt below failed miserably with the following error:
cat huge.txt | join -o 1.1,1.2,1.3,1.4,1.5,2.2 - small.txt > output.txt
join: memory exhausted
What I suspect is that the sort order isn't right somehow, even though the files are pre-sorted using:
sort -k1 huge.unsorted.txt > huge.txt
sort -k1 small.unsorted.txt > small.txt
The problems seem to appear around words that contain apostrophes (') or dashes (-). I also tried dictionary sorting with the -d option, but hit the same error at the end.
I tried loading the files into MySQL, creating indexes, and joining them, but it looks like it would take weeks on my laptop. (I don't have a machine with more memory or a fast disk/SSD for this task.)
I see two ways out of this but don't know how to implement any of them.
How do I sort the files in a way that the join command considers them to be sorted properly?
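One likely culprit (an assumption, not something confirmed here): join compares lines in plain byte order, while sort in a typical UTF-8 locale collates apostrophes and dashes differently, so join decides the input is out of order. A minimal sketch that forces both tools into the C locale, using tiny stand-in files with the names from above:

```shell
# Sketch: force byte-order collation so sort and join agree.
# -k1,1 sorts on the first field only; plain -k1 means "from field 1
# to end of line", i.e. effectively a whole-line sort.
printf 'had stirred my corruption 57\nhad stirred me to 46\n' > huge.unsorted.txt
printf 'had 987654\ncalf 2757974\n' > small.unsorted.txt

LC_ALL=C sort -k1,1 huge.unsorted.txt > huge.txt
LC_ALL=C sort -k1,1 small.unsorted.txt > small.txt

# -o takes the output field list as ONE comma-separated argument
LC_ALL=C join -o 1.1,1.2,1.3,1.4,1.5,2.2 huge.txt small.txt > output.txt
cat output.txt
```

The same LC_ALL=C must be in effect for both the sort and the join; mixing locales between the two steps reintroduces the mismatch.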
I was thinking of calculating MD5 or some other hash of each string to get rid of the apostrophes and dashes, while leaving the numbers at the end of the lines intact. Do the sorting and joining on the hashes instead of the strings themselves, and finally "translate" the hashes back to strings. Since there would be only 150K hashes, that's not too bad. What would be a good way to calculate an individual hash for each string? Some AWK magic?
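As an aside (a sketch of a different route, not the hashing idea itself): since small.txt is only 150K rows, it fits comfortably in memory, so awk can load it into an associative array and stream huge.txt past it in a single pass, with no sorting or hashing at all:

```shell
# Sketch: replace join with a one-pass awk lookup. small.txt is read
# first (NR==FNR is true only for the first file) into an array keyed
# by the first word; huge.txt is then streamed and matching lines get
# the number appended. Tiny stand-in files for illustration:
printf 'had 987654\ncalf 2757974\n' > small.txt
printf 'had stirred me to 46\nhad stirred my corruption 57\n' > huge.txt

awk 'NR==FNR { num[$1] = $2; next }   # first file: remember word -> number
     $1 in num { print $0, num[$1] }  # second file: emit line plus number
    ' small.txt huge.txt > output.txt
cat output.txt
```

This needs memory proportional to small.txt only, and huge.txt is read exactly once, so the 14 GB file never has to be re-sorted.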
See file samples at the end.
Sample of huge.txt
had stirred me to 46
had stirred my corruption 57
had stirred old emotions 55
had stirred something in 69
had stirred something within 40
Sample of small.txt
caley 114881
calf 2757974
calfed 137861
calfee 71143
calflora 154624
calfskin 148347
calgary 9416465
calgon's 94846
had 987654
Desired output:
had stirred me to 46 987654
had stirred my corruption 57 987654
had stirred old emotions 55 987654
had stirred something in 69 987654
had stirred something within 40 987654
ok, you provided huge.txt and small.txt .. can you please provide the desired output/result? – akira – 2010-05-27T18:57:01.837
please see above – dnkb – 2010-05-27T20:22:05.747
Being nosy here but I have to ask. What kind of analysis are you doing with all that data? – Nifle – 2010-05-27T23:10:51.570
@Nifle: master plan to take over the world :) – akira – 2010-05-28T11:23:49.213
@Nifle, @akira: almost :) actually this is about processing the famous google web corpus in order to compile stimuli for a psycholinguistic experiment. the numbers are frequencies of the strings on the english language www as google saw it in 2006. i'm sorry if this is a disappointingly lame reason to churn through all this data :) – dnkb – 2010-05-28T15:40:51.277
@dnkb: did you try my approach? – akira – 2010-05-28T17:08:50.173
not yet. i'm experimenting with something else, once i get home i'll see if it worked. if not i'll try yours. – dnkb – 2010-05-28T20:43:50.277
@akira: Yay, my silly trick worked, see new answer below. Thank you for your help though, I appreciate it. – dnkb – 2010-06-01T02:17:13.410