Dither a Grayscale Image

23

5

Dither a grayscale image into pure black and white with your own algorithm.

Guidelines: You must come up with your own new algorithm. You cannot use pre-existing algorithms (e.g. Floyd-Steinberg), but you can use the general technique. Your program must be able to read an image and produce an image of the same size. This is a popularity contest, so whoever produces the best (closest to the original) and most creative output (as determined by votes) wins. Bonus if the code is short, though this is not necessary.

You can use whatever grayscale image you want as input; it should be larger than 300x300. Any file format is fine.

Example input:

puppy

Example output:

dithered

This is a pretty good job, but there are still visible lines and patterns.

qwr

Posted 2014-05-03T21:16:44.520

Reputation: 8 929

@A.L I was hoping to see an answer to this. I've done some searching and found that it was a popular "wallpaper" distributed since about 2007 by "Crazy-Frankenstein.com", and I found a copy in a blog archive from June 2005, http://valentinedayisu.blogspot.com/2005/06/canine-wallpaper.html, without the crazy-frankenstein watermark.

– Glenn Randers-Pehrson – 2017-01-03T01:55:07.230

Hm. Without any restriction on the code what prevents people from just implementing some established algorithm that is known to give "the best" results? – Martin Ender – 2014-05-03T21:24:43.617

@m.buettner I thought about this but I couldn't come up with a good solution. I don't want to do byte count, because that would give some languages a large advantage. Maybe speed and memory usage. – qwr – 2014-05-03T21:29:14.423

+1 for an interesting challenge, but I think this would be much better as a [code-golf] (with a spec) or some other completely objective criterion. – Doorknob – 2014-05-03T21:30:31.353

The problem with code-size, speed and memory usage is that you'd need an objective threshold for how recognisable the result must be for the answer to be valid, which is also quite impossible. Popularity contest does make sense, but without any restriction on the code there is no incentive for people to think out of the box. I'd rather upvote a clever answer than one that gives the best result because it just implemented an existing algorithm. But you are currently incentivising the latter. – Martin Ender – 2014-05-03T21:31:05.377

@m.buettner I've edited the criteria so a new algorithm has to be made and creativity is taken into account. This doesn't stop people from slightly modifying an existing algorithm, but I hope it encourages creativity (every realistic way to dither has already been developed) – qwr – 2014-05-03T21:37:38.547

As a reference, a simple mean threshold of this image is here. Technically, that counts as dithering, according to Wikipedia.

– Geobits – 2014-05-04T00:05:05.333

The line between an algorithm and its technique is too thin to determine which side something falls on. – Peter Taylor – 2014-05-04T05:55:42.207

I think it would be a lot easier to compare the results if they all showed results from the same image. – joeytwiddle – 2014-05-04T17:55:41.587

@Doorknob I disagree. Image quality is, by definition, subjective. If you put a code-golf tag on this, you will end up with a load of answers giving a 50% black/white threshold. It's a rare example of correct application of the popularity-contest tag (i.e., when there is no way of judging objectively). – Level River St – 2014-05-04T19:31:02.070

@steveverrill By "with a spec" I mean "with an exact specification of what the output should be, i.e. every solution produces the same output." – Doorknob – 2014-05-04T19:31:53.950

@Doorknob defining the output would be both difficult and limiting. Much of what makes image processing interesting is that you try an algorithm and you have no idea if it's going to be any good until you see the results. Remember this question? Very broad, 182 upvotes, no downvotes. If every question had a tight spec we would never have seen this. http://codegolf.stackexchange.com/q/22144/15599

– Level River St – 2014-05-04T19:43:40.627

Can you add the source of the image? (I don't think someone will be angry to see his/her image here, but it's fair to cite the source) – A.L – 2014-05-07T01:57:49.163

Answers

16

Fortran

Okay, I'm using an obscure image format called FITS, which is used in astronomy. This means there is a Fortran library for reading and writing such images. Also, ImageMagick and Gimp can both read/write FITS images.

The algorithm I use is based on "Sierra Lite" dithering, but with two improvements:
a) I reduce the propagated error by a factor of 4/5.
b) I introduce a random variation in the diffusion matrix while keeping its sum constant.
Together these almost completely eliminate the patterns seen in the OP's example.
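
For readers who do not want to wade through the CFITSIO boilerplate, the diffusion step can be sketched roughly as the Python below. This is my paraphrase of the Fortran (assuming img is a 2-D NumPy array of grayscale values 0-255), not the code that produced the images.

import numpy as np

def dither_sierra_lite_random(img, rng=None):
    """Sketch of the modified Sierra Lite kernel: weights 2/5 (right),
    1/5 (below-left) and 1/5 (below), so only 4/5 of the error is kept,
    with a random shift of weight between the right and below taps."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    out = np.asarray(img, dtype=float).copy()
    err = np.zeros((h + 1, w + 1))
    for y in range(h):
        for x in range(w):
            p = min(max(out[y, x] + err[y, x], 0.0), 255.0)
            new = 0.0 if p < 127.0 else 255.0
            perr = p - new                      # error to diffuse
            out[y, x] = new
            r = 0.6 * (rng.random() - 0.5)      # jitter in [-0.3, 0.3]
            err[y, x + 1] += perr * (2.0 / 5.0 + r)        # right
            if x > 0:
                err[y + 1, x - 1] += perr * (1.0 / 5.0)    # below-left
            err[y + 1, x] += perr * (1.0 / 5.0 - r)        # below
    return out.astype(np.uint8)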

Assuming you have the CFITSIO library installed, compile with

gfortran -lcfitsio dither.f90

The file names are hard-coded (couldn't be bothered to fix this).

Code:

program dither
  integer              :: status,unit,readwrite,blocksize,naxes(2),nfound
  integer              :: group,npixels,bitpix,naxis,i,j,fpixel,un
  real                 :: nullval,diff_mat(3,2),perr
  real, allocatable    :: image(:,:), error(:,:)
  integer, allocatable :: seed(:)
  logical              :: anynull,simple,extend
  character(len=80)    :: filename

  call random_seed(size=Nrand)
  allocate(seed(Nrand))
  open(newunit=un,file="/dev/urandom",access="stream",&
       form="unformatted",action="read",status="old")
  read(un) seed
  close(un)
  call random_seed(put=seed)
  deallocate(seed)

  status=0
  call ftgiou(unit,status)
  filename='PUPPY.FITS'
  readwrite=0
  call ftopen(unit,filename,readwrite,blocksize,status)
  call ftgknj(unit,'NAXIS',1,2,naxes,nfound,status)
  call ftgidt(unit,bitpix,status)
  npixels=naxes(1)*naxes(2)
  group=1
  nullval=-999
  allocate(image(naxes(1),naxes(2)))
  allocate(error(naxes(1)+1,naxes(2)+1))
  call ftgpve(unit,group,1,npixels,nullval,image,anynull,status)
  call ftclos(unit, status)
  call ftfiou(unit, status)

  ! Diffusion weights: 2/5 to the right, 1/5 below-left, 1/5 below (sum = 4/5)
  diff_mat=0.0
  diff_mat(3,1) = 2.0
  diff_mat(1,2) = 1.0
  diff_mat(2,2) = 1.0
  diff_mat=diff_mat/5.0

  error=0.0
  perr=0
  do j=1,naxes(2)
    do i=1,naxes(1)
      p=max(min(image(i,j)+error(i,j),255.0),0.0)
      if (p < 127.0) then
        perr=p
        image(i,j)=0.0
      else
        perr=p-255.0
        image(i,j)=255.0
      endif
      ! Random jitter shifts weight between the "right" and "below" taps
      ! while keeping the total diffused error constant
      call random_number(r)
      r=0.6*(r-0.5)
      error(i+1,j)=  error(i+1,j)  +perr*(diff_mat(3,1)+r)
      error(i-1,j+1)=error(i-1,j+1)+perr*diff_mat(1,2)
      error(i  ,j+1)=error(i ,j+1) +perr*(diff_mat(2,2)-r)
    end do
  end do

  call ftgiou(unit,status)
  blocksize=1
  filename='PUPPY-OUT.FITS'
  call ftinit(unit,filename,blocksize,status)
  simple=.true.
  naxis=2
  extend=.true.
  call ftphpr(unit,simple,bitpix,naxis,naxes,0,1,extend,status)
  group=1
  fpixel=1
  call ftppre(unit,group,fpixel,npixels,image,status)
  call ftclos(unit, status)
  call ftfiou(unit, status)

  deallocate(image)
  deallocate(error)
end program dither

Example output for the puppy image in the OP's post:
Dithered picture of puppy
The OP's example output:
OP's dithered picture of puppy

semi-extrinsic

Posted 2014-05-03T21:16:44.520

Reputation: 641

This looks really good, might be unbeatable for quality – aditsu quit because SE is EVIL – 2014-05-04T14:16:59.807

Thanks! I don't know that it's unbeatable, but it's going to be hard (very subjective) to judge this against other good algorithms. – semi-extrinsic – 2014-05-04T18:13:18.337

I know I golf code by abusing backwards compatibility, but it actually looks like you abuse it as standard. This code is actually making me cry. – Kyle Kanos – 2014-05-04T19:17:53.643

@KyleKanos I'm always happy when my code makes someone cry :p On topic though, what specifically is horrible here?
Yes, I could've used "implicit none", but where's the fun in that? I do use it for serious coding at work, but not for golfing. And I definitely agree that the CFITSIO library API is completely horrible (ftppre() outputs a FITS image in single real precision, ftpprj() outputs an image in double integer precision, etc.) but that's F77 backwards compatibility for you.
– semi-extrinsic – 2014-05-05T07:09:52.177

I should also add: I used "diff_mat(3,1)" etc. instead of just putting the coefficients inside the loop because I wanted the algorithm to be easier to spot. – semi-extrinsic – 2014-05-05T07:12:23.190

Character with * notation instead of len=###, sparing use of :: in variable declaration, lack of program <name>/end program, & general indentation. What the code does is awesome, though. – Kyle Kanos – 2014-05-05T10:48:38.930

Okay, so most of those were just me being sloppy. I improved it. Constructive criticism is always appreciated :) – semi-extrinsic – 2014-05-05T17:56:38.940

34

GraphicsMagick/ImageMagick

Ordered Dither:

magick B2DBy.jpg -gamma .45455 -ordered-dither [all] 4x4 ordered4x4g45.pbm

Before complaining about my using an "established algorithm" please read the ChangeLog for GraphicsMagick and ImageMagick for April 2003 where you'll see that I implemented the algorithm in those applications. Also, the combination of "-gamma .45455" with "-ordered-dither" is new.

The "-gamma .45455" takes care of the image being too light. The "all" parameter is only needed with GraphicsMagick.

There's banding because there are only 17 graylevels in a 4x4 ordered-dither image. The appearance of banding can be reduced by using an 8x8 ordered-dither which has 65 levels.
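
As an aside, for readers who want to reproduce the idea outside GraphicsMagick/ImageMagick, a 4x4 ordered dither with a gamma adjustment can be sketched in Python roughly as below. It uses the standard Bayer matrix and a plain power-law gamma, which may not match the exact threshold map and gamma handling of "-ordered-dither", so treat it as an illustration only.

import numpy as np
from PIL import Image

# Standard 4x4 Bayer matrix; each entry becomes a threshold (entry + 0.5) / 16.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def ordered_dither(path, gamma=0.45455):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    gray = gray ** (1.0 / gamma)          # rough stand-in for "-gamma .45455"
    h, w = gray.shape
    tiles = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return Image.fromarray(np.where(gray > (tiles + 0.5) / 16.0, 255, 0).astype(np.uint8))

Something like ordered_dither("puppy.jpg").save("ordered.png") (hypothetical file names) gives the same flavour of result.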

Here are the original image, the 4x4 and 8x8 ordered dithered output and the random-threshold output: enter image description here

I prefer the ordered-dither version, but am including the random-threshold version for completeness.

magick B2DBy.jpg -gamma .6 -random-threshold 10x90% random-threshold.pbm

The "10x90%" means to render pixels below 10 percent intensity as pure black and those above 90 percent as pure white, to avoid having a few lonely specks in those areas.
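
A few lines of Python that approximate "-random-threshold", mainly to illustrate the 10x90% clamping (again an illustration, not the exact ImageMagick behaviour):

import numpy as np
from PIL import Image

def random_threshold(path, low=0.10, high=0.90, gamma=0.6):
    gray = (np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0) ** (1.0 / gamma)
    # Pixels below `low` can never beat a threshold drawn from [low, high)
    # (forced black), and pixels above `high` always beat it (forced white).
    t = np.random.uniform(low, high, size=gray.shape)
    return Image.fromarray(np.where(gray > t, 255, 0).astype(np.uint8))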

It's probably worth noting that both are as memory-efficient as they can possibly be. Neither does any diffusion, so they work one pixel at a time, even when writing ordered-dither blocks, and don't need to know anything about the neighboring pixels. ImageMagick and GraphicsMagick process a row at a time, but that's not necessary for these methods. The ordered-dither conversions take less than 0.04 seconds of real time on my old x86_64 computer.

Glenn Randers-Pehrson

Posted 2014-05-03T21:16:44.520

Reputation: 1 877

"Before complaining about my using an "established algorithm" please read the ChangeLog for GraphicsMagick and ImageMagick for April 2003 where you'll see that I implemented the algorithm in those applications." +1 for sheer cheek. – Joe Z. – 2014-05-04T18:03:49.443

22

I apologize for the code style; I threw this together using some libraries we just built in my Java class, and it has a bad case of copy-paste and magic numbers. The algorithm picks random rectangles in the image and checks whether the average brightness is greater in the dithered image or the original image. It then turns a pixel on or off to bring the brightnesses closer in line, preferentially picking pixels that are more different from the original image. I think it does a better job of bringing out thin details like the puppy's hair, but the image is noisier because it tries to bring out details even in areas with none.
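
In rough NumPy terms, one iteration of the idea looks something like the sketch below. It is my paraphrase of the description above (it omits the small 0.005 neighbour-weighting terms the Java applies when choosing which pixel to flip), not a port of the code.

import numpy as np

def dither_step(target, dithered, rng):
    """One iteration: pick a random square window, then flip the single pixel
    whose value is furthest from the target in whichever direction reduces the
    window's brightness mismatch.  Both arrays are floats in 0..1."""
    h, w = target.shape
    s = rng.integers(5, 10)               # window size 5..9, as in the Java
    y = rng.integers(0, h - s)
    x = rng.integers(0, w - s)
    t = target[y:y + s, x:x + s]
    d = dithered[y:y + s, x:x + s]        # a view: writes modify `dithered`
    if t.mean() > d.mean():
        # window too dark: turn on the pixel that is most "too dark"
        i, j = np.unravel_index(np.argmax(t - d), t.shape)
        d[i, j] = 1.0
    else:
        # window too bright: turn off the pixel that is most "too bright"
        i, j = np.unravel_index(np.argmax(d - t), t.shape)
        d[i, j] = 0.0

The answer's loop runs a million such iterations.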

enter image description here

public void dither(){
    int count = 0;
    ditheredFrame.suspendNotifications();
    while(count < 1000000){
        count ++;
        int widthw = 5+r.nextInt(5);
        int heightw = widthw;
        int minx = r.nextInt(width-widthw);
        int miny = r.nextInt(height-heightw);



            Frame targetCropped = targetFrame.crop(minx, miny, widthw, heightw);
            Frame ditherCropped = ditheredFrame.crop(minx, miny, widthw, heightw);

            if(targetCropped.getAverage().getBrightness() > ditherCropped.getAverage().getBrightness() ){
                int x = 0;
                int y = 0;
                double diff = 0;

                for(int i = 1; i < ditherCropped.getWidth()-1; i ++){
                    for(int j = 1; j < ditherCropped.getHeight()-1; j ++){
                        double d = targetCropped.getPixel(i,  j).getBrightness() - ditherCropped.getPixel(i, j).getBrightness();
                        d += .005* targetCropped.getPixel(i+1,  j).getBrightness() - .005*ditherCropped.getPixel(i+1, j).getBrightness();

                        d += .005* targetCropped.getPixel(i-1,  j).getBrightness() - .005*ditherCropped.getPixel(i-1, j).getBrightness();

                        d += .005* targetCropped.getPixel(i,  j+1).getBrightness() -.005* ditherCropped.getPixel(i, j+1).getBrightness();

                        d += .005* targetCropped.getPixel(i,  j-1).getBrightness() - .005*ditherCropped.getPixel(i, j-1).getBrightness();

                        if(d > diff){
                            diff = d;
                            x = i;
                            y = j;
                        }
                    }
                    ditherCropped.setPixel(x,  y,  WHITE);
                }

            } else {
                int x = 0;
                int y = 0;
                double diff = 0;

                for(int i = 1; i < ditherCropped.getWidth()-1; i ++){
                    for(int j = 1; j < ditherCropped.getHeight()-1; j ++){
                        double d =  ditherCropped.getPixel(i, j).getBrightness() -targetCropped.getPixel(i,  j).getBrightness();
                        d += -.005* targetCropped.getPixel(i+1,  j).getBrightness() +.005* ditherCropped.getPixel(i+1, j).getBrightness();

                        d += -.005* targetCropped.getPixel(i-1,  j).getBrightness() +.005* ditherCropped.getPixel(i+1, j).getBrightness();

                        d += -.005* targetCropped.getPixel(i,  j+1).getBrightness() + .005*ditherCropped.getPixel(i, j+1).getBrightness();

                        d += -.005* targetCropped.getPixel(i,  j-1).getBrightness() + .005*ditherCropped.getPixel(i, j-1).getBrightness();



                        if(d > diff){
                            diff = d;
                            x = i;
                            y = j;
                        }
                    }
                    ditherCropped.setPixel(x,  y,  BLACK);
                }
            }


    }
    ditheredFrame.resumeNotifications();
}

QuadmasterXLII

Posted 2014-05-03T21:16:44.520

Reputation: 881

I take it that this is deterministic? If so, how fast is it? – Οurous – 2014-05-06T03:04:00.100

It's random, and takes about 3 seconds on my computer. – QuadmasterXLII – 2014-05-06T03:18:49.670

While perhaps not the greatest-fidelity algorithm, the results are art all by themselves. – AJMansfield – 2014-05-06T23:57:52.000

I really like the look of this algorithm! But I think maybe it looks so good partly because it produces a texture roughly similar to fur, and this is an animal with fur. But I am not completely sure this is true. Could you post another image of e.g. a car? – semi-extrinsic – 2014-05-08T06:54:53.857

I think this is the best answer, both in terms of originality of the algorithm and in terms of awesomeness of results. I'd also really like to see it run on some other images as well. – Nathaniel – 2014-10-06T03:20:51.213

13

Ghostscript (with little help of ImageMagick)

Far from being my 'own new algorithm', but, sorry, could not resist it.

convert puppy.jpg puppy.pdf && \
convert puppy.jpg -depth 8 -colorspace Gray -resize 20x20! -negate gray:- | \
gs -q -sDEVICE=ps2write -o- -c \
    '<</Thresholds (%stdin) (r) file 400 string readstring pop 
       /HalftoneType 3 /Width 20 /Height 20
     >>sethalftone' \
    -f puppy.pdf | \
gs -q -sDEVICE=pngmono -o puppy.png -

enter image description here

Of course it works better without the 'same size' restraint.

user2846289

Posted 2014-05-03T21:16:44.520

Reputation: 1 541

This is hilarious. I am stunned by the fact that no one has commented on this Warhol-style marvel. – Andreï Kostyrka – 2016-08-31T17:21:02.337

10

JAVA

Here is my submission. It takes a JPG image, calculates per-pixel luminosity (thanks to Bonan in this SO question) and then checks it against a randomly chosen threshold to decide whether the resulting pixel will be black or white. The darkest pixels will always be black and the brightest pixels will always be white, to preserve image details.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;

import javax.imageio.ImageIO;

public class DitherGrayscale {

    private BufferedImage original;
    private double[] threshold = { 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31,
            0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41, 0.42,
            0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5, 0.51, 0.52, 0.53,
            0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64,
            0.65, 0.66, 0.67, 0.68, 0.69 };


    public static void main(String[] args) {
        DitherGrayscale d = new DitherGrayscale();
        d.readOriginal();
        d.dither();

    }

    private void readOriginal() {
        File f = new File("original.jpg");
        try {
            original = ImageIO.read(f);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void dither() {
        BufferedImage imRes = new BufferedImage(original.getWidth(),
                original.getHeight(), BufferedImage.TYPE_INT_RGB);
        Random rn = new Random();
        for (int i = 0; i < original.getWidth(); i++) {
            for (int j = 0; j < original.getHeight(); j++) {

                int color = original.getRGB(i, j);

                int red = (color >>> 16) & 0xFF;
                int green = (color >>> 8) & 0xFF;
                int blue = (color >>> 0) & 0xFF;

                double lum = (red * 0.21f + green * 0.71f + blue * 0.07f) / 255;

                if (lum <= threshold[rn.nextInt(threshold.length)]) {
                    imRes.setRGB(i, j, 0x000000);
                } else {
                    imRes.setRGB(i, j, 0xFFFFFF);
                }

            }
        }
        try {
            ImageIO.write(imRes, "PNG", new File("result.png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

Processed image

Other examples:

Original Processed

Also works with full color images:

Color image Result

Averroes

Posted 2014-05-03T21:16:44.520

Reputation: 3 771

9

CJam

lNl_~:H;:W;Nl;1N[{[{ri}W*]{2/{:+}%}:P~}H*]z{P}%:A;H{W{I2/A=J2/=210/[0X9EF]=I2%2*J2%+m>2%N}fI}fJ

95 bytes :)
It uses the ASCII PGM (P2) format with no comment line, for both input and output.

The method is very basic: it adds up each 2*2 square of pixels, converts the sum to the 0..4 range, then uses a corresponding pattern of 4 bits to generate a 2*2 block of black-and-white pixels.
That also means that the width and height must be even.
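
For anyone who does not read CJam, the deterministic version boils down to something like the following Python sketch. It is a paraphrase of the description above; the exact 2*2 patterns are my guess (the CJam encodes its own in the constant [0X9EF]).

import numpy as np

# One 2x2 pattern of white pixels per brightness level 0..4 (layouts guessed).
PATTERNS = [np.array(p) for p in (
    [[0, 0], [0, 0]],    # level 0: all black
    [[0, 0], [0, 1]],    # level 1
    [[1, 0], [0, 1]],    # level 2: diagonal
    [[0, 1], [1, 1]],    # level 3
    [[1, 1], [1, 1]],    # level 4: all white
)]

def block_dither(gray):
    """`gray`: 2-D uint8 array, values 0..255, with even width and height."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            level = int(gray[y:y+2, x:x+2].sum()) // 210   # map 0..1020 to 0..4
            out[y:y+2, x:x+2] = PATTERNS[level] * 255
    return out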

Sample:

deterministic puppy

And a random algorithm in only 27 bytes:

lNl_~*:X;Nl;1N{ri256mr>N}X*

It uses the same file format.

Sample:

random puppy

And finally a mixed approach: random dithering with a bias towards a checkerboard pattern; 44 bytes:

lNl_~:H;:W;Nl;1NH{W{ri128_mr\IJ+2%*+>N}fI}fJ
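
For the curious, "random dithering with a bias towards a checkerboard" can be expressed in Python along these lines. This is a loose paraphrase of the idea, not a literal translation of the 44 bytes above.

import numpy as np

def checker_biased_dither(gray, rng=None):
    """`gray`: 2-D uint8 array.  On one checkerboard parity the random threshold
    is drawn from the dark half (0..127), on the other from the bright half
    (128..255), so mid-grey areas tend towards a checkerboard."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = gray.shape
    parity = np.add.outer(np.arange(h), np.arange(w)) % 2     # (i + j) % 2
    threshold = rng.integers(0, 128, size=(h, w)) + 128 * parity
    return np.where(gray > threshold, 255, 0).astype(np.uint8)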

Sample:

mixed puppy

aditsu quit because SE is EVIL

Posted 2014-05-03T21:16:44.520

Reputation: 22 326

The first one is comparable to the Nintendo DSi's "Flipnote Studio" application. – BobTheAwesome – 2015-05-14T21:58:13.683

7

Java (1.4+)

I am not sure whether I am reinventing the wheel here, but I think it may be unique...

with limited random sequences

With limited random sequences

Pure random dithering

Pure random dithering

enter image description here

City image from Averroes's answer

The algorithm uses the concept of localized luminosity energy and normalization to retain features. The initial version used a random jitter to produce a dithered look over areas of similar luminosity, but it was not that visually appealing. To counter this, a small set of fixed-length random sequences is mapped to the raw input pixel luminosity, and samples from those sequences are drawn iteratively and repeatedly, yielding dithered-looking backgrounds.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LocalisedEnergyDitherRepeatRandom {

    static private final float EIGHT_BIT_DIVISOR=1.0F/256;
    private static final int KERNEL_SIZE_DIV_2 =4;
    private static final double JITTER_MULTIPLIER=0.01;
    private static final double NO_VARIANCE_JITTER_MULTIPLIER=1.5;
    private static final int RANDOM_SEQUENCE_LENGTH=10;
    private static final int RANDOM_SEQUENCE_COUNT=20;
    private static final boolean USE_RANDOM_SEQUENCES=true;

    public static void main(String args[]) throws Exception
    {
        BufferedImage image = ImageIO.read(new File("data/dog.jpg"));
        final int width = image.getWidth();
        final int height = image.getHeight();
        float[][][] yuvImage =convertToYUVImage(image);
        float[][][] outputYUVImage = new float[width][height][3];
        double circularKernelLumaMean,sum,jitter,outputLuma,variance,inputLuma;
        int circularKernelPixelCount,y,x,kx,ky;
        double[][] randomSequences = new double[RANDOM_SEQUENCE_COUNT][RANDOM_SEQUENCE_LENGTH];
        int[] randomSequenceOffsets = new int[RANDOM_SEQUENCE_COUNT];

        // Generate random sequences
        for (y=0;y<RANDOM_SEQUENCE_COUNT;y++)
        {
            for (x=0;x<RANDOM_SEQUENCE_LENGTH;x++)
            {
                randomSequences[y][x]=Math.random();
            }
        }

        for (y=0;y<height;y++)
        {
            for (x=0;x<width;x++)
            {
                circularKernelLumaMean=0;
                sum=0;
                circularKernelPixelCount=0;

                /// calculate the mean of the localised surrounding pixels contained in 
                /// the area of a circle surrounding the pixel.
                for (ky=y-KERNEL_SIZE_DIV_2;ky<y+KERNEL_SIZE_DIV_2;ky++)
                {
                    if (ky>=0 && ky<height)
                    {
                        for (kx=x-KERNEL_SIZE_DIV_2;kx<x+KERNEL_SIZE_DIV_2;kx++)
                        {
                            if (kx>=0 && kx<width)
                            {
                                double distance= Math.sqrt((x-kx)*(x-kx)+(y-ky)*(y-ky));

                                if (distance<=KERNEL_SIZE_DIV_2)
                                {
                                    sum+=yuvImage[kx][ky][0];
                                    circularKernelPixelCount++;
                                }
                            }
                        }
                    }
                }

                circularKernelLumaMean=sum/circularKernelPixelCount;

                /// calculate the variance of the localised surrounding pixels contained in 
                /// the area of a circle surrounding the pixel.
                sum =0;

                for (ky=y-KERNEL_SIZE_DIV_2;ky<y+KERNEL_SIZE_DIV_2;ky++)
                {
                    if (ky>=0 && ky<height)
                    {
                        for (kx=x-KERNEL_SIZE_DIV_2;kx<x+KERNEL_SIZE_DIV_2;kx++)
                        {
                            if (kx>=0 && kx<width)
                            {
                                double distance= Math.sqrt((x-kx)*(x-kx)+(y-ky)*(y-ky));

                                if (distance<=KERNEL_SIZE_DIV_2)
                                {
                                    sum+=Math.abs(yuvImage[kx][ky][0]-circularKernelLumaMean);
                                }
                            }
                        }
                    }
                }

                variance = sum/(circularKernelPixelCount-1);

                if (variance==0)
                {
                    // apply some jitter to ensure there are no large blocks of single colour
                    inputLuma=yuvImage[x][y][0];
                    jitter = Math.random()*NO_VARIANCE_JITTER_MULTIPLIER;
                }
                else
                {
                    // normalise the pixel based on localised circular kernel
                    inputLuma = outputYUVImage[x][y][0]=(float) Math.min(1.0, Math.max(0,yuvImage[x][y][0]/(circularKernelLumaMean*2)));
                    jitter = Math.exp(variance)*JITTER_MULTIPLIER;
                }

                if (USE_RANDOM_SEQUENCES)
                {
                    int sequenceIndex =(int) (yuvImage[x][y][0]*RANDOM_SEQUENCE_COUNT);
                    int sequenceOffset = (randomSequenceOffsets[sequenceIndex]++)%RANDOM_SEQUENCE_LENGTH;
                    outputLuma=inputLuma+randomSequences[sequenceIndex][sequenceOffset]*jitter*2-jitter;
                }
                else
                {
                    outputLuma=inputLuma+Math.random()*jitter*2-jitter;
                }


                // convert 8bit luma to 2 bit luma
                outputYUVImage[x][y][0]=outputLuma<0.5?0:1.0f;
            }
        }

        renderYUV(image,outputYUVImage);
        ImageIO.write(image, "png", new File("data/dog.jpg.out.png"));
    }

      /**
       * Converts the given buffered image into a 3-dimensional array of YUV pixels
       * @param image the buffered image to convert
       * @return 3-dimensional array of YUV pixels
       */
      static private float[][][] convertToYUVImage(BufferedImage image)
      {
        final int width = image.getWidth();
        final int height = image.getHeight();
        float[][][] yuv = new float[width][height][3];
        for (int y=0;y<height;y++)
        {
          for (int x=0;x<width;x++)
          {
            int rgb = image.getRGB( x, y );
            yuv[x][y]=rgb2yuv(rgb);
          }
        }
        return yuv;
      }

      /**
       * Renders the given YUV image into the given buffered image.
       * @param image the buffered image to render to
       * @param pixels the YUV image to render.
       * @param dimension the
       */
      static private void renderYUV(BufferedImage image, float[][][] pixels)
      {
        final int height = image.getHeight();
        final int width = image.getWidth();
        int rgb;

        for (int y=0;y<height;y++)
        {
          for (int x=0;x<width;x++)
          {

            rgb = yuv2rgb( pixels[x][y] );
            image.setRGB( x, y,rgb );
          }
        }
      }

      /**
       * Converts a RGB pixel into a YUV pixel
       * @param rgb a pixel encoded as 24 bit RGB
       * @return array representing a pixel. Consisting of Y,U and V components
       */
      static float[] rgb2yuv(int rgb)
      {
        float red = EIGHT_BIT_DIVISOR*((rgb>>16)&0xFF);
        float green = EIGHT_BIT_DIVISOR*((rgb>>8)&0xFF);
        float blue = EIGHT_BIT_DIVISOR*(rgb&0xFF);

        float Y = 0.299F*red + 0.587F * green + 0.114F * blue;
        float U = (blue-Y)*0.565F;
        float V = (red-Y)*0.713F;

        return new float[]{Y,U,V};
      }

      /**
       * Converts a YUV pixel into a RGB pixel
       * @param yuv array representing a pixel. Consisting of Y,U and V components
       * @return a pixel encoded as 24 bit RGB
       */
      static int yuv2rgb(float[] yuv)
      {
        int red = (int) ((yuv[0]+1.403*yuv[2])*256);
        int green = (int) ((yuv[0]-0.344 *yuv[1]- 0.714*yuv[2])*256);
        int blue = (int) ((yuv[0]+1.77*yuv[1])*256);

        // clamp to 0-255 range
        red=red<0?0:red>255?255:red;
        green=green<0?0:green>255?255:green;
        blue=blue<0?0:blue>255?255:blue;

        return (red<<16) | (green<<8) | blue;
      }

}

Moogie

Posted 2014-05-03T21:16:44.520

Reputation: 1 505

Very nice. It definitely gives a different effect than the other answers so far. – Geobits – 2016-02-18T02:19:47.963

@Geobits Yeah, it surprised me how effective it is. However, I am not sure whether I would call it dithering, as it produces quite visually different output – Moogie – 2016-02-18T03:17:57.757

That does look quite unique. – qwr – 2016-02-18T03:28:19.597

5

Python

The idea is the following: the image gets divided into n x n tiles. We calculate the average colour of each of those tiles. Then we map the colour range 0 - 255 to the range 0 - n*n, which gives us a new value v. We then colour all pixels in that tile black, and randomly colour v pixels within that tile white. It is far from optimal but still gives recognizable results. Depending on the resolution, it usually works best at n=2 or n=3. At n=2 you can already find artefacts from the simulated colour depth, while at n=3 it can already get somewhat blurry. I assumed that the images should stay the same size, but you can of course also use this method and just double/triple the size of the generated image in order to get more details.

PS: I know that I am a bit late to the party; I remember that I did not have any ideas when the challenge started, but now I just had this brain wave =)

from PIL import Image
import random
im = Image.open("dog.jpg") #Can be many different formats.
new = im.copy()
pix = im.load()
newpix = new.load()
width,height=im.size
print([width,height])
print(pix[1,1])

window = 3 # input parameter 'n'

area = window*window
for i in range(width//window):     #loop over pixels
    for j in range(height//window):#loop over pixels
        avg = 0
        area_pix = []
        for k in range(window):
            for l in range(window):
                area_pix.append((k,l))#make a list of coordinates within the tile
                try:
                    avg += pix[window*i+k,window*j+l][0] 
                    newpix[window*i+k,window*j+l] = (0,0,0) #set everything to black
                except IndexError:
                    avg += 255/2 #just an arbitrary mean value (when we're outside of the image)
                                # this is just a dirty trick for coping with images that have
                                # sides that are not multiples of window
        avg = avg/area
        # val = v is the number of pixels within the tile that will be turned white
        val = round(avg/255 * (area+0.99) - 0.5)#0.99 due to rounding errors
        assert val<=area,'something went wrong with the val'
        print(val)
        random.shuffle(area_pix) #randomize pixel coordinates
        for m in range(val):
            rel_coords = area_pix.pop()#find random pixel within tile and turn it white
            newpix[window*i+rel_coords[0],window*j+rel_coords[1]] = (255,255,255)

new.save('dog_dithered'+str(window)+'.jpg')

Results:

n=2:

enter image description here

n=3:

enter image description here

flawr

Posted 2014-05-03T21:16:44.520

Reputation: 40 560

3

Any file format you want is fine.

Let's define a very compact, theoretical file format for this question, as any of the existing file formats have too much overhead to write a quick answer for.

Let the first four bytes of the image file define the width and height of the image in pixels, respectively:

00000001 00101100 00000001 00101100
width: 300 px     height: 300 px

followed by w * h bytes of grayscale values from 0 to 255:

10000101 10100101 10111110 11000110 ... [89,996 more bytes]

Then, we can define a piece of code in Python (145 bytes) that will take this image and do this:

import random
f=open("a","rb");g=open("b","wb");g.write(f.read(4))
for i in f.read():g.write('\xff' if ord(i)>random.randint(0,255) else '\x00')

which "dithers" by setting each pixel to white with probability proportional to its grayscale value, and to black otherwise.


Applied on the sample image, it gives something like this:

dithered dog

It's not too pretty, but it does look very similar when scaled down in a preview, and for just 145 bytes of Python, I don't think you can get much better.

Joe Z.

Posted 2014-05-03T21:16:44.520

Reputation: 30 589

Can you share an example? I believe this is random dithering, and the results are not the cleanest...nice profile picture though – qwr – 2014-05-03T22:12:04.010

This is indeed random dithering, and I'm making an example of your sample picture at the moment. – Joe Z. – 2014-05-03T22:15:18.017

I think it might benefit from a contrast boost. I don't know python, but I assume random.randint(0,255) is picking a random number between 0 and 255. Try limiting to between say 55 and 200, which will force any shades outside that range to be pure black or white. With many pictures you can get a good, striking image with no dithering, just a simple threshold. (Random + contrast boost would give an image intermediate between your current image and simple threshold.) – Level River St – 2014-05-03T22:56:11.447

I think random dithering should be called Geiger dithering (because it looks like a Geiger counter's output). Who agrees? – Joe Z. – 2014-05-04T18:04:57.590

That's almost precisely what ImageMagick and GraphicsMagick do with the "-random-threshold" option that I added along with "-ordered-dither" years ago (added to my answer). Again, bumping the gamma helps with getting the right intensity. I agree with the "Geiger dithering" suggestion. – Glenn Randers-Pehrson – 2014-05-04T19:48:36.907

3

Cobra

Takes a 24-bit or 32-bit PNG/BMP file (JPG produces output with some greys in it). It is also extensible to files containing color.

It uses speed-optimized ELA to dither the image into 3-bit color, which comes out as pure black/white when given your grayscale test image.

Did I mention that it's really fast?
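
For readers unfamiliar with Cobra, my reading of the core idea is: run a one-dimensional error-accumulation ("ELA") pass over the image in several scan orientations and let the passes vote per pixel. Below is a rough Python sketch of that structure only (four orientations here, whereas the code votes over seven rotated/flipped copies, and its exact per-pixel rule may differ slightly from mine).

import numpy as np

def ela_1d(gray):
    """One directional pass: keep a running error per row and choose black or
    white per pixel using the same comparison as dither_ela above."""
    out = np.zeros_like(gray)
    for y in range(gray.shape[0]):
        e = 0.0
        for x in range(gray.shape[1]):
            v = float(gray[y, x])
            e -= v
            # choose white if it leaves the compared error term smaller
            if abs(e + 255 - v) < abs(e - v):
                out[y, x] = 255
                e += 255
    return out

def dither_vote(gray):
    """Run the 1-D pass in several orientations and majority-vote per pixel."""
    passes = [
        ela_1d(gray),                            # left to right
        ela_1d(gray[:, ::-1])[:, ::-1],          # right to left
        ela_1d(gray.T).T,                        # top to bottom
        ela_1d(gray.T[:, ::-1])[:, ::-1].T,      # bottom to top
    ]
    votes = sum(p.astype(int) // 255 for p in passes)
    return np.where(votes * 2 > len(passes), 255, 0).astype(np.uint8)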

use System.Drawing
use System.Drawing.Imaging
use System.Runtime.InteropServices
use System.IO

class BW_Dither

    var path as String = Directory.getCurrentDirectory to !
    var rng = Random()

    def main
        file as String = Console.readLine to !
        image as Bitmap = Bitmap(.path+"\\"+file)
        image = .dither(image)
        image.save(.path+"\\test\\image.png")

    def dither(image as Bitmap) as Bitmap

        output as Bitmap = Bitmap(image)

        layers as Bitmap[] = @[ Bitmap(image), Bitmap(image), Bitmap(image),
                                Bitmap(image), Bitmap(image), Bitmap(image),
                                Bitmap(image)]

        layers[0].rotateFlip(RotateFlipType.RotateNoneFlipX)
        layers[1].rotateFlip(RotateFlipType.RotateNoneFlipY)
        layers[2].rotateFlip(RotateFlipType.Rotate90FlipX)
        layers[3].rotateFlip(RotateFlipType.Rotate90FlipY)
        layers[4].rotateFlip(RotateFlipType.Rotate90FlipNone)
        layers[5].rotateFlip(RotateFlipType.Rotate180FlipNone)
        layers[6].rotateFlip(RotateFlipType.Rotate270FlipNone)

        for i in layers.length, layers[i] = .dither_ela(layers[i])

        layers[0].rotateFlip(RotateFlipType.RotateNoneFlipX)
        layers[1].rotateFlip(RotateFlipType.RotateNoneFlipY)
        layers[2].rotateFlip(RotateFlipType.Rotate270FlipY)
        layers[3].rotateFlip(RotateFlipType.Rotate270FlipX)
        layers[4].rotateFlip(RotateFlipType.Rotate270FlipNone)
        layers[5].rotateFlip(RotateFlipType.Rotate180FlipNone)
        layers[6].rotateFlip(RotateFlipType.Rotate90FlipNone)

        vals = List<of List<of uint8[]>>()
        data as List<of uint8[]> = .getData(output)
        for l in layers, vals.add(.getData(l))
        for i in data.count, for c in 3
            x as int = 0
            for n in vals.count, if vals[n][i][c] == 255, x += 1
            if x > 3.5, data[i][c] = 255 to uint8
            if x < 3.5, data[i][c] = 0 to uint8

        .setData(output, data)
        return output

    def dither_ela(image as Bitmap) as Bitmap

        error as decimal[] = @[0d, 0d, 0d]
        rectangle as Rectangle = Rectangle(0, 0, image.width, image.height)
        image_data as BitmapData = image.lockBits(rectangle, ImageLockMode.ReadWrite, image.pixelFormat) to !
        pointer as IntPtr = image_data.scan0
        bytes as uint8[] = uint8[](image_data.stride * image.height)
        pfs as int = Image.getPixelFormatSize(image.pixelFormat) // 8
        Marshal.copy(pointer, bytes, 0, image_data.stride * image.height)

        for y as int in image.height, for x as int in image.width
            position as int = (y * image_data.stride) + (x * pfs)
            for i in 3
                error[i] -= bytes[position + i]
                if Math.abs(error[i] + 255 - bytes[position + i]) < Math.abs(error[i] - bytes[position + i])
                    bytes[position + i] = 255 to uint8
                    error[i] += 255
                else, bytes[position + i] = 0 to uint8

        Marshal.copy(bytes, 0, pointer, image_data.stride * image.height)
        image.unlockBits(image_data)
        return image

    def getData(image as Bitmap) as List<of uint8[]>

        rectangle as Rectangle = Rectangle(0, 0, image.width, image.height)
        image_data as BitmapData = image.lockBits(rectangle, ImageLockMode.ReadOnly, image.pixelFormat) to !
        pointer as IntPtr = image_data.scan0
        bytes as uint8[] = uint8[](image_data.stride * image.height)
        pixels as List<of uint8[]> = List<of uint8[]>()
        for i in image.width * image.height, pixels.add(nil)
        pfs as int = Image.getPixelFormatSize(image.pixelFormat) // 8
        Marshal.copy(pointer, bytes, 0, image_data.stride * image.height)

        count as int = 0
        for y as int in image.height, for x as int in image.width
            position as int = (y * image_data.stride) + (x * pfs)
            if pfs == 4, alpha as uint8 = bytes[position + 3]
            else, alpha as uint8 = 255
            pixels[count] = @[
                                bytes[position + 2], #red
                                bytes[position + 1], #green
                                bytes[position],     #blue
                                alpha]               #alpha
            count += 1

        image.unlockBits(image_data)
        return pixels

    def setData(image as Bitmap, pixels as List<of uint8[]>)
        if pixels.count <> image.width * image.height, throw Exception()
        rectangle as Rectangle = Rectangle(0, 0, image.width, image.height)
        image_data as BitmapData = image.lockBits(rectangle, ImageLockMode.ReadWrite, image.pixelFormat) to !
        pointer as IntPtr = image_data.scan0
        bytes as uint8[] = uint8[](image_data.stride * image.height)
        pfs as int = Image.getPixelFormatSize(image.pixelFormat) // 8
        Marshal.copy(pointer, bytes, 0, image_data.stride * image.height)

        count as int = 0
        for y as int in image.height, for x as int in image.width
            position as int = (y * image_data.stride) + (x * pfs)
            if pfs == 4, bytes[position + 3] = pixels[count][3] #alpha
            bytes[position + 2] = pixels[count][0]              #red
            bytes[position + 1] = pixels[count][1]              #green
            bytes[position] = pixels[count][2]                  #blue
            count += 1

        Marshal.copy(bytes, 0, pointer, image_data.stride * image.height)
        image.unlockBits(image_data)

Dog

Trees

Οurous

Posted 2014-05-03T21:16:44.520

Reputation: 7 916

To reduce repetition, have you considered creating a temporary variable col, and leaving the image.setPixel(x,y,col) until the very end? – joeytwiddle – 2014-05-04T18:04:09.033

What's with the trees image? – AJMansfield – 2014-05-07T00:00:08.497

It looks nice, and provides an example of this working with colors as well. – Οurous – 2014-05-07T00:07:46.467

2

Java

Low-level code, using PNGJ, with noise addition plus basic diffusion. This implementation requires an 8-bit grayscale PNG source.

import java.io.File;
import java.util.Random;

import ar.com.hjg.pngj.ImageInfo;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngReaderInt;
import ar.com.hjg.pngj.PngWriter;

public class Dither {

    private static void dither1(File file1, File file2) {
        PngReaderInt reader = new PngReaderInt(file1);
        ImageInfo info = reader.imgInfo;
        if( info.bitDepth != 8 || !info.greyscale ) throw new RuntimeException("only 8bits grayscale valid");
        PngWriter writer = new PngWriter(file2, reader.imgInfo);
        ImageLineInt line2 = new ImageLineInt(info);
        int[] y = line2.getScanline();
        int[] ye = new int[info.cols];
        int ex = 0;
        Random rand = new Random();
        while(reader.hasMoreRows()) {
            int[] x = reader.readRowInt().getScanline();
            for( int i = 0; i < info.cols; i++ ) {
                int t = x[i] + ex + ye[i];
                y[i] = t > rand.nextInt(256) ? 255 : 0;
                ex = (t - y[i]) / 2;
                ye[i] = ex / 2;
            }
            writer.writeRow(line2);
        }
        reader.end();
        writer.end();
    }

    public static void main(String[] args) {
        dither1(new File(args[0]), new File(args[1]));
        System.out.println("See output in " + args[1]);
    }

}

(Add this jar to your build path if you want to try it).

enter image description here

As a bonus: this is extremely efficient in memory usage (it only stores three rows) so it could be used for huge images.

leonbloy

Posted 2014-05-03T21:16:44.520

Reputation: 121

I like it but it looks a bit blurred around the edges, methinks. – BobTheAwesome – 2015-05-14T21:59:57.020

Nitpick: I would think "used for huge images" is not so important (have you ever seen a >8 GB grayscale PNG?), but "used on e.g. embedded devices" is a much more salient point. – semi-extrinsic – 2014-05-08T06:57:37.313

1

Java

Just a simple RNG-based algorithm, plus some logic for dealing with color images. It sets any given pixel to white with probability proportional to that pixel's original brightness, and sets it to black otherwise.

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;
import java.util.Scanner;

import javax.imageio.ImageIO;


public class Ditherizer {
    public static void main(String[]a) throws IOException{
        Scanner input=new Scanner(System.in);
        Random rand=new Random();
        BufferedImage img=ImageIO.read(new File(input.nextLine()));
        for(int x=0;x<img.getWidth();x++){
            for(int y=0;y<img.getHeight();y++){
                Color c=new Color(img.getRGB(x, y));
                int r=c.getRed(),g=c.getGreen(),b=c.getBlue();
                double bright=(r==g&&g==b)?r:Math.sqrt(0.299*r*r+0.587*g*g+0.114*b*b);
                img.setRGB(x,y,(rand.nextInt(256)>bright?Color.BLACK:Color.WHITE).getRGB());    
            }
        }
        ImageIO.write(img,"jpg",new File(input.nextLine()));
        input.close();
    }
}

Here's a potential result for the dog image:

enter image description here

SuperJedi224

Posted 2014-05-03T21:16:44.520

Reputation: 11 342

Why don't you add the explanation to the top instead of the bottom where nobody is going to read it? I really like that idea=) – flawr – 2015-11-15T20:38:44.637