American Gothic in the palette of Mona Lisa: Rearrange the pixels


You are given two true color images, the Source and the Palette. They do not necessarily have the same dimensions but it is guaranteed that their areas are the same, i.e. they have the same number of pixels.

Your task is to create an algorithm that makes the most accurate looking copy of the Source by only using the pixels in the Palette. Each pixel in the Palette must be used exactly once in a unique position in this copy. The copy must have the same dimensions as the Source.

This Python script can be used to ensure these constraints are met:

from PIL import Image
def check(palette, copy):
    palette = sorted(Image.open(palette).convert('RGB').getdata())
    copy = sorted(Image.open(copy).convert('RGB').getdata())
    print('Success' if copy == palette else 'Failed')

check('palette.png', 'copy.png')

Here are several pictures for testing. They all have the same area. Your algorithm should work for any two images with equal areas, not just American Gothic and the Mona Lisa. You should of course show your output.

American Gothic Mona Lisa Starry Night The Scream River Rainbow

Thanks to Wikipedia for the images of famous paintings.

Scoring

This is a popularity contest so the highest voted answer wins. But I'm sure there's lots of ways to be creative with this!

Animation

millinon had the idea that it would be cool to see the pixels rearrange themselves. I thought so too, so I wrote this Python script that takes two images made of the same colors and draws the intermediate images between them. Update: I just revised it so each pixel moves the minimum amount it has to. It is no longer random.
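The animation script itself isn't included here, but its core step (pairing each pixel in the start frame with a same-colored pixel in the end frame, so every pixel has a start and end position to interpolate between) can be sketched. The scanline-order pairing below is an illustrative assumption on my part, not the actual minimum-movement matching the script uses:

```python
from collections import defaultdict

def pair_pixels(start, end, width):
    """Pair each pixel position in `start` with a position of the same
    color in `end`. Both are flat scanline lists over the same color
    multiset. Taking same-colored positions in scanline order is a
    simple heuristic; minimizing total travel would instead solve a
    small assignment problem per color."""
    by_color = defaultdict(list)
    for i, c in enumerate(end):
        by_color[c].append(i)
    moves = []
    for i, c in enumerate(start):
        j = by_color[c].pop(0)  # next unused position of this color
        moves.append(((i % width, i // width), (j % width, j // width)))
    return moves

# Tiny 2x2 demo: two reds and two blues swap places.
start = ['red', 'blue', 'blue', 'red']
end = ['blue', 'red', 'red', 'blue']
print(pair_pixels(start, end, 2))
```

Each returned pair is a (from, to) coordinate; intermediate frames then just interpolate every pixel along its own segment.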

First is the Mona Lisa turning into aditsu's American Gothic. Next is bitpwner's American Gothic (from Mona Lisa) turning into aditsu's. It's amazing that the two versions share the exact same color palette.

Mona Lisa to American Gothic animation animating between two versions of American Gothic made from Mona Lisa

The results are really quite astounding. Here is aditsu's rainbow Mona Lisa (slowed to show detail).

rainbow spheres to Mona Lisa animation

This last animation is not necessarily related to the contest. It shows what happens when my script is used to rotate an image 90 degrees.

tree rotation animation

Calvin's Hobbies

Posted 2014-07-09T04:03:39.377

Reputation: 84 000


Related: https://github.com/jcjohnson/neural-style

– Vi. – 2015-11-16T16:03:18.827

Apparently vice.com published an article about this

– aditsu quit because SE is EVIL – 2018-01-24T09:27:21.637

Must the process be entirely unguided, or is it permitted to give the program hints about the parts of the image which are most important (e.g. Lisa's face)? – Peter Taylor – 2014-07-09T07:13:52.110

@Quincunx I do know that only one is a png, but I just made sure that my script only considers the RGB values in each image. The strings 'palette.png' and 'copy.png' were just placeholders since PIL will load most image types. If it helps anyone I could put all the images in the same format, say bmp. – Calvin's Hobbies – 2014-07-09T07:30:06.547

@PeterTaylor I'd say that you should not have to manually tell the program what parts of the image to put more detail into. That goes against the idea of your algorithm working in the general case just given two images. You may definitely do this programmatically though. (I'd still love to see your output even if you stick with manual detailing!) – Calvin's Hobbies – 2014-07-09T07:39:11.767

To increase the hits on your question you may wish to consider entitling it, "American Gothic in the palette of Mona Lisa: Rearrange the pixels" – DavidC – 2014-07-09T12:39:54.103

I always referred to this as "pixel shuffling" https://www.flickr.com/photos/mjswart/295731754/in/photolist-sjEdW-s8GKm-5YQXXJ-4xEMjD-4vUyGR-swN8k-swafc-suDjy

– Michael J Swart – 2014-07-09T14:03:30.267

Hi, I just want to congratulate you on this original challenge! Very refreshing and interesting. – bolov – 2014-07-09T15:26:00.297

Thanks for the support @bolov. I just added a couple more images for some more variety. – Calvin's Hobbies – 2014-07-09T17:13:43.323

@Calvin'sHobbies I would definitely do this challenge, if I knew how to work with images :P – qwr – 2014-07-09T17:49:29.203

@qwr It is not that difficult - can you tell me a language you know? I am pretty sure you can easily edit images in most languages - and it's great fun=) PS: Calvin'sHobbies you are making sure you keep us busy by adding new images, eh! – flawr – 2014-07-09T20:46:50.917

@flawr I mostly do python. The reason I was hesitant to do this problem is that I wasn't sure that simple luminance matching would work; I wanted maybe edge detection but that's a hard task. – qwr – 2014-07-09T21:11:14.270

@qwr have a look at the codes here, they did similar things (but I do not know how easy/difficult): https://codegolf.stackexchange.com/questions/11141/encode-images-into-tweets-extreme-image-compression-edition

– flawr – 2014-07-09T21:26:08.683

I'm glad this isn't a [code-golf]. – Ming-Tang – 2014-07-10T00:47:23.497

Love the animation! :D – eithed – 2014-07-10T06:43:17.733

Fantastic job with the gif! – millinon – 2014-07-10T21:05:31.107

The bitpwner's AG -> aditsu's one is bizarre. I can't believe that they indeed share the same colours. I guess aditsu's approach, in the case of sorting, takes surrounding pixels into account as well? I'll take a look at that when I've got a moment. – eithed – 2014-07-11T11:30:02.713

Why don't you edit your question and include your code here, instead of on a separate site, to stop link rot? – Michael A – 2014-07-11T13:35:03.723

@Ben I just included a shorter version of the validity checker but the animation script is quite long and is somewhat of an afterthought on a question that's already getting too long. – Calvin's Hobbies – 2014-07-11T18:36:04.867

So how long would it take to brute force all combinations of the pixels, and choose the picture with the shortest averaged pixel distance? – Cruncher – 2014-07-11T18:37:50.413

http://gif-explode.com/?explode=http://i.stack.imgur.com/GEeS4.gif how come the frames look wrong? I was interested about the rotating image example. But the last frame when decomposing it is simply incorrect. – Cruncher – 2014-07-11T18:46:14.547

@Cruncher I know. The original images aren't like that, it's just a gif artifact. I may make an hd video of these animations soon so they can be viewed properly. – Calvin's Hobbies – 2014-07-11T18:51:50.430


@Calvin'sHobbies http://jsfiddle.net/eithe/J7jEk/31/ - boredom got to me :D

– eithed – 2014-07-11T21:25:22.167

@eithedog Very nice! If you want it, I actually scaled down my tree; the original is at http://i.stack.imgur.com/PjUCP.png.

– Calvin's Hobbies – 2014-07-11T21:55:02.443

@Calvin'sHobbies - cheers! now with the originals I'll have to figure out why the hell is red moving :D (at least black stays in place) – eithed – 2014-07-11T22:32:48.577

While doing the 90° turn with the tree, the borders should remain mostly untouched. So you are not using the travel-distance-optimized version, are you? – Leif – 2014-07-12T16:02:56.877

My mobile data limit gets a terrible burn every time I visit this page. – Vectorized – 2014-07-12T16:50:01.920

@Calvin'sHobbies When you make HD videos, please make the pixels twice or thrice the size and let them move very slowly and only one normal pixel at a time. This should make it a little bit easier to distinguish the pixels and see them move. It's naturally impossible with normal sized pixels that cannot move less than one pixel at a time. – Leif – 2014-07-12T18:40:44.280

This is well on its way to becoming the most popular question on the site. Congratulations! – Joe Z. – 2014-07-13T00:49:49.060

if you're editing the pixels in a 2D array moving on an expanding circular track from the center, you get a frame-like border. it's not perfect but hides the problem american gothic to mona lisa

– bebe – 2014-07-13T19:16:10.100

For the record, the optimal solution can be computed in O(N^3) using the Hungarian algorithm. Of course, it's not very practical for N = 295 x 357. However, if you enforce a constraint that a point can only be sent to one of its 10 closest neighbors, it starts to become tractable. – Arthur B – 2014-07-11T16:26:02.957
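The optimal-assignment idea in this comment can be illustrated on a toy scale with brute force over permutations instead of the Hungarian algorithm (O(n!), so only feasible for a handful of pixels; it finds the same optimum):

```python
from itertools import permutations

def best_assignment(source, palette):
    """Exhaustively find the ordering of `palette` that minimizes the
    total squared RGB distance to `source`. This is the exact optimum
    the Hungarian algorithm would find in O(n^3) instead of O(n!)."""
    def cost(perm):
        return sum((a - b) ** 2
                   for s, p in zip(source, perm)
                   for a, b in zip(s, p))
    return min(permutations(palette), key=cost)

# Two pixels: the near-red palette color should land on the red source
# pixel and the near-blue one on the blue source pixel.
source = [(255, 0, 0), (0, 0, 255)]
palette = [(0, 0, 200), (200, 0, 0)]
print(best_assignment(source, palette))  # → ((200, 0, 0), (0, 0, 200))
```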

The rainbow Mona Lisa one runs much slower than the rest and the animation is really jerky. I think you might want to either edit the time between frames or create more frames. – trlkly – 2014-07-14T03:39:59.693

@trlkly I made it slow so people could see the process better. Adding a lot more frames would put it over the 2MB limit. – Calvin's Hobbies – 2014-07-14T04:10:25.753

@Calvin'sHobbies Then may I at least suggest mentioning that in the Question itself? Like maybe changing the caption above to say "Here is aditsu's rainbow Mona Lisa, running in slow motion so you can see the details." I actually originally thought something was wrong on my end. Also, too bad you can't add any more frames to make it smoother, though. – trlkly – 2014-07-14T04:31:19.097

Good job. I knew there was a shorter wording, but my brain was fried at the time. BTW, did you know that this is the first Question on all Stack Exchange that I've voted up? – trlkly – 2014-07-14T23:48:36.463

The solutions keep getting better and better. Awesome – bolov – 2014-07-16T09:41:32.857

Thanks everyone for the support and especially for the consistently varied and interesting answers. In case anyone was wondering, I've kind of stalled on making an animation video, and by now there's a pretty small chance it'll happen, sorry. :/ – Calvin's Hobbies – 2014-07-16T10:17:45.887

Answers


Java - GUI with progressive randomized transformation

I tried a LOT of things, some of them very complicated, then I finally came back to this relatively-simple code:

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.Timer;

@SuppressWarnings("serial")
public class CopyColors extends JFrame {
    private static final String SOURCE = "spheres";
    private static final String PALETTE = "mona";
    private static final int COUNT = 10000;
    private static final int DELAY = 20;
    private static final int LUM_WEIGHT = 10;

    private static final double[] F = {0.114, 0.587, 0.299};
    private final BufferedImage source;
    protected final BufferedImage dest;
    private final int sw;
    private final int sh;
    private final int n;
    private final Random r = new Random();
    private final JLabel l;

    public CopyColors(final String sourceName, final String paletteName) throws IOException {
        super("CopyColors by aditsu");
        source = ImageIO.read(new File(sourceName + ".png"));
        final BufferedImage palette = ImageIO.read(new File(paletteName + ".png"));
        sw = source.getWidth();
        sh = source.getHeight();
        final int pw = palette.getWidth();
        final int ph = palette.getHeight();
        n = sw * sh;
        if (n != pw * ph) {
            throw new RuntimeException();
        }
        dest = new BufferedImage(sw, sh, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < sh; ++i) {
            for (int j = 0; j < sw; ++j) {
                final int x = i * sw + j;
                dest.setRGB(j, i, palette.getRGB(x % pw, x / pw));
            }
        }
        l = new JLabel(new ImageIcon(dest));
        add(l);
        final JButton b = new JButton("Save");
        add(b, BorderLayout.SOUTH);
        b.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(final ActionEvent e) {
                try {
                    ImageIO.write(dest, "png", new File(sourceName + "-" + paletteName + ".png"));
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        });
    }

    protected double dist(final int x, final int y) {
        double t = 0;
        double lx = 0;
        double ly = 0;
        for (int i = 0; i < 3; ++i) {
            final double xi = ((x >> (i * 8)) & 255) * F[i];
            final double yi = ((y >> (i * 8)) & 255) * F[i];
            final double d = xi - yi;
            t += d * d;
            lx += xi;
            ly += yi;
        }
        double l = lx - ly;
        return t + l * l * LUM_WEIGHT;
    }

    public void improve() {
        final int x = r.nextInt(n);
        final int y = r.nextInt(n);
        final int sx = source.getRGB(x % sw, x / sw);
        final int sy = source.getRGB(y % sw, y / sw);
        final int dx = dest.getRGB(x % sw, x / sw);
        final int dy = dest.getRGB(y % sw, y / sw);
        if (dist(sx, dx) + dist(sy, dy) > dist(sx, dy) + dist(sy, dx)) {
            dest.setRGB(x % sw, x / sw, dy);
            dest.setRGB(y % sw, y / sw, dx);
        }
    }

    public void update() {
        l.repaint();
    }

    public static void main(final String... args) throws IOException {
        final CopyColors x = new CopyColors(SOURCE, PALETTE);
        x.setSize(800, 600);
        x.setLocationRelativeTo(null);
        x.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        x.setVisible(true);
        new Timer(DELAY, new ActionListener() {
            @Override
            public void actionPerformed(final ActionEvent e) {
                for (int i = 0; i < COUNT; ++i) {
                    x.improve();
                }
                x.update();
            }
        }).start();
    }
}

All the relevant parameters are defined as constants at the beginning of the class.

The program first copies the palette image into the source dimensions, then repeatedly chooses 2 random pixels and swaps them if doing so would bring the image closer to the source. "Closer" is defined by a color distance function that combines the luma-weighted differences of the r, g, b components with the total luma difference, giving luma the greater weight.
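The heart of the loop can be paraphrased in Python (a sketch, not aditsu's code verbatim; the weights are the standard BT.601 luma coefficients, matching the F array above read in R, G, B order):

```python
import random

F = (0.299, 0.587, 0.114)  # BT.601 per-channel luma weights (R, G, B)
LUM_WEIGHT = 10

def dist(c1, c2):
    """Luma-weighted squared distance between two RGB triples."""
    t = lum = 0.0
    for a, b, f in zip(c1, c2, F):
        d = (a - b) * f
        t += d * d   # weighted per-channel squared difference
        lum += d     # accumulates the total luma difference
    return t + lum * lum * LUM_WEIGHT

def improve(source, dest):
    """Pick two random positions and swap the dest pixels there if the
    swap brings dest strictly closer to source."""
    x = random.randrange(len(dest))
    y = random.randrange(len(dest))
    if (dist(source[x], dest[x]) + dist(source[y], dest[y]) >
            dist(source[x], dest[y]) + dist(source[y], dest[x])):
        dest[x], dest[y] = dest[y], dest[x]

# Tiny demo: a 4-pixel "image" rearranges itself toward the source.
source = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)]
dest = [(0, 0, 255), (255, 0, 0), (0, 0, 255), (255, 0, 0)]
for _ in range(2000):
    improve(source, dest)
print(dest)  # almost surely equal to source after this many swaps
```

Because a swap only permutes existing pixels, the color multiset of dest is preserved at every step, which is exactly what the challenge requires.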

It takes just a few seconds for the shapes to form, but a while longer for the colors to come together. You can save the current image at any time. I usually waited about 1-3 minutes before saving.

Results:

Unlike some other answers, these images were all generated using the exact same parameters (other than the file names).

American Gothic palette

mona-gothic scream-gothic

Mona Lisa palette

gothic-mona scream-mona spheres-mona

Starry Night palette

mona-night scream-night spheres-night

The Scream palette

gothic-scream mona-scream night-scream spheres-scream

Spheres palette

I think this is the toughest test and everybody should post their results with this palette:

gothic-spheres mona-spheres scream-spheres

Sorry, I didn't find the river image very interesting so I haven't included it.

I also added a video at https://www.youtube.com/watch?v=_-w3cKL5teM ; it shows what the program does (not exactly in real time, but similar) and then shows the gradual pixel movement using Calvin's Python script. Unfortunately the video quality is significantly degraded by YouTube's encoding/compression.

aditsu quit because SE is EVIL

Posted 2014-07-09T04:03:39.377

Reputation: 22 326

Eww. You're extending JFrame! Nice pictures by the way. – Justin – 2014-07-10T15:22:18.883

@Quincunx And I'm not calling invokeLater either, shoot me :p Also, thanks :) – aditsu quit because SE is EVIL – 2014-07-10T15:23:42.497

After some more testing, it looks like it converges faster to similar results if I use a lower LUM_WEIGHT. Oh well... – aditsu quit because SE is EVIL – 2014-07-10T16:01:41.557

Best answer so far... – Yuval Filmus – 2014-07-10T19:40:29.897

When in doubt, brute force it? Seems like an excellent solution, I'd love to see an animation for this, maybe even a video instead of a gif. – Lilienthal – 2014-07-10T20:36:04.393

wow, random magic wins – Display Name – 2014-07-11T18:26:54.730

@aditsu Would it be alright if I used some of your images in a YouTube video to showcase my animation script? – Calvin's Hobbies – 2014-07-11T18:55:13.670

@Calvin'sHobbies sure, go ahead – aditsu quit because SE is EVIL – 2014-07-11T19:11:17.340


You could extend the algorithm a bit to a full simulated annealing for a small improvement. What you're doing is already very close (but it's greedy). Finding the permutation that minimizes the distance seems like a hard optimization problem, so this kind of heuristic is fitting. @Lilienthal this is not brute forcing, it's actually close to commonly used optimization techniques.

– Szabolcs – 2014-07-12T05:03:52.413

This algorithm has the best results by far. And it is so simple. This makes it a clear winner for me. – Leif – 2014-07-12T18:13:14.903

Can you provide a precompiled version of your code? I think I have a compiler, but not everyone does / knows how to. A JAR will do. – John Dvorak – 2014-07-14T05:36:16.850

@JanDvorak I guess I could, but I'd have to change it because the paths are hardcoded, figure out where to host it, etc – aditsu quit because SE is EVIL – 2014-07-14T15:11:42.163

Well done, needless to say you have my vote :) – hobbs – 2014-07-14T20:25:31.757

I modified this to stop itself when it doesn't swap pixels 5000 times in a row. It takes 13 seconds to do that on spheres palette mona lisa. http://pastebin.com/qm2DfnTv

– durron597 – 2014-07-16T15:52:05.767

Very interesting algorithm, definitely deserves the winner imo. Unless there is any conversion done that I cannot see in your initial image setup, it should be a lot faster if you copy the whole array at once: int[] rdbData = source.getRGB(0, 0, source.getWidth(), source.getHeight(), (int[]) null, 0, source.getWidth()); then dest.setRGB(0, 0, dest.getWidth(), dest.getHeight(), rdbData , 0, dest.getWidth()); – TwoThe – 2014-08-09T08:23:01.360

I made a fastest-algorithm challenge for your idea as this seems to be very popular. See here

– TwoThe – 2014-08-11T11:12:27.043


Java

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import javax.imageio.ImageIO;

/**
 *
 * @author Quincunx
 */
public class PixelRearranger {

    public static void main(String[] args) throws IOException {
        BufferedImage source = ImageIO.read(resource("American Gothic.png"));
        BufferedImage palette = ImageIO.read(resource("Mona Lisa.png"));
        BufferedImage result = rearrange(source, palette);
        ImageIO.write(result, "png", resource("result.png"));
        validate(palette, result);
    }

    public static class MInteger {
        int val;

        public MInteger(int i) {
            val = i;
        }
    }

    public static BufferedImage rearrange(BufferedImage source, BufferedImage palette) {
        BufferedImage result = new BufferedImage(source.getWidth(),
                source.getHeight(), BufferedImage.TYPE_INT_RGB);

        //This creates a list of points in the Source image.
        //Then, we shuffle it and will draw points in that order.
        List<Point> samples = getPoints(source.getWidth(), source.getHeight());
        System.out.println("gotPoints");

        //Create a list of colors in the palette.
        rgbList = getColors(palette);
        Collections.sort(rgbList, rgb);
        rbgList = new ArrayList<>(rgbList);
        Collections.sort(rbgList, rbg);
        grbList = new ArrayList<>(rgbList);
        Collections.sort(grbList, grb);
        gbrList = new ArrayList<>(rgbList);
        Collections.sort(gbrList, gbr);
        brgList = new ArrayList<>(rgbList);
        Collections.sort(brgList, brg);
        bgrList = new ArrayList<>(rgbList);
        Collections.sort(bgrList, bgr);

        while (!samples.isEmpty()) {
            Point currentPoint = samples.remove(0);
            int sourceAtPoint = source.getRGB(currentPoint.x, currentPoint.y);
            int bestColor = search(new MInteger(sourceAtPoint));
            result.setRGB(currentPoint.x, currentPoint.y, bestColor);
        }
        return result;
    }

    public static List<Point> getPoints(int width, int height) {
        HashSet<Point> points = new HashSet<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                points.add(new Point(x, y));
            }
        }
        List<Point> newList = new ArrayList<>();
        List<Point> corner1 = new LinkedList<>();
        List<Point> corner2 = new LinkedList<>();
        List<Point> corner3 = new LinkedList<>();
        List<Point> corner4 = new LinkedList<>();

        Point p1 = new Point(width / 3, height / 3);
        Point p2 = new Point(width * 2 / 3, height / 3);
        Point p3 = new Point(width / 3, height * 2 / 3);
        Point p4 = new Point(width * 2 / 3, height * 2 / 3);

        newList.add(p1);
        newList.add(p2);
        newList.add(p3);
        newList.add(p4);
        corner1.add(p1);
        corner2.add(p2);
        corner3.add(p3);
        corner4.add(p4);
        points.remove(p1);
        points.remove(p2);
        points.remove(p3);
        points.remove(p4);

        long seed = System.currentTimeMillis();
        Random c1Random = new Random(seed += 179426549); //The prime number pushes the first numbers apart
        Random c2Random = new Random(seed += 179426549); //Or at least I think it does.
        Random c3Random = new Random(seed += 179426549);
        Random c4Random = new Random(seed += 179426549);

        Dir NW = Dir.NW;
        Dir N = Dir.N;
        Dir NE = Dir.NE;
        Dir W = Dir.W;
        Dir E = Dir.E;
        Dir SW = Dir.SW;
        Dir S = Dir.S;
        Dir SE = Dir.SE;
        while (!points.isEmpty()) {
            putPoints(newList, corner1, c1Random, points, NW, N, NE, W, E, SW, S, SE);
            putPoints(newList, corner2, c2Random, points, NE, N, NW, E, W, SE, S, SW);
            putPoints(newList, corner3, c3Random, points, SW, S, SE, W, E, NW, N, NE);
            putPoints(newList, corner4, c4Random, points, SE, S, SW, E, W, NE, N, NW);
        }
        return newList;
    }

    public static enum Dir {
        NW(-1, -1), N(0, -1), NE(1, -1), W(-1, 0), E(1, 0), SW(-1, 1), S(0, 1), SE(1, 1);
        final int dx, dy;

        private Dir(int dx, int dy) {
            this.dx = dx;
            this.dy = dy;
        }

        public Point add(Point p) {
            return new Point(p.x + dx, p.y + dy);
        }
    }

    public static void putPoints(List<Point> newList, List<Point> listToAddTo, Random rand,
                                 HashSet<Point> points, Dir... adj) {
        List<Point> newPoints = new LinkedList<>();
        for (Iterator<Point> iter = listToAddTo.iterator(); iter.hasNext();) {
            Point p = iter.next();
            Point pul = adj[0].add(p);
            Point pu = adj[1].add(p);
            Point pur = adj[2].add(p);
            Point pl = adj[3].add(p);
            Point pr = adj[4].add(p);
            Point pbl = adj[5].add(p);
            Point pb = adj[6].add(p);
            Point pbr = adj[7].add(p);
            int allChosen = 0;
            if (points.contains(pul)) {
                if (rand.nextInt(5) == 0) {
                    allChosen++;
                    newPoints.add(pul);
                    newList.add(pul);
                    points.remove(pul);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pu)) {
                if (rand.nextInt(5) == 0) {
                    allChosen++;
                    newPoints.add(pu);
                    newList.add(pu);
                    points.remove(pu);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pur)) {
                if (rand.nextInt(3) == 0) {
                    allChosen++;
                    newPoints.add(pur);
                    newList.add(pur);
                    points.remove(pur);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pl)) {
                if (rand.nextInt(5) == 0) {
                    allChosen++;
                    newPoints.add(pl);
                    newList.add(pl);
                    points.remove(pl);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pr)) {
                if (rand.nextInt(2) == 0) {
                    allChosen++;
                    newPoints.add(pr);
                    newList.add(pr);
                    points.remove(pr);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pbl)) {
                if (rand.nextInt(5) == 0) {
                    allChosen++;
                    newPoints.add(pbl);
                    newList.add(pbl);
                    points.remove(pbl);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pb)) {
                if (rand.nextInt(3) == 0) {
                    allChosen++;
                    newPoints.add(pb);
                    newList.add(pb);
                    points.remove(pb);
                }
            } else {
                allChosen++;
            }
            if (points.contains(pbr)) {
                newPoints.add(pbr);
                newList.add(pbr);
                points.remove(pbr);
            }
            if (allChosen == 7) {
                iter.remove();
            }
        }
        listToAddTo.addAll(newPoints);
    }

    public static List<MInteger> getColors(BufferedImage img) {
        int width = img.getWidth();
        int height = img.getHeight();
        List<MInteger> colors = new ArrayList<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                colors.add(new MInteger(img.getRGB(x, y)));
            }
        }
        return colors;
    }

    public static int search(MInteger color) {
        int rgbIndex = binarySearch(rgbList, color, rgb);
        int rbgIndex = binarySearch(rbgList, color, rbg);
        int grbIndex = binarySearch(grbList, color, grb);
        int gbrIndex = binarySearch(gbrList, color, gbr);
        int brgIndex = binarySearch(brgList, color, brg);
        int bgrIndex = binarySearch(bgrList, color, bgr);

        double distRgb = dist(rgbList.get(rgbIndex), color);
        double distRbg = dist(rbgList.get(rbgIndex), color);
        double distGrb = dist(grbList.get(grbIndex), color);
        double distGbr = dist(gbrList.get(gbrIndex), color);
        double distBrg = dist(brgList.get(brgIndex), color);
        double distBgr = dist(bgrList.get(bgrIndex), color);

        double minDist = Math.min(Math.min(Math.min(Math.min(Math.min(
                distRgb, distRbg), distGrb), distGbr), distBrg), distBgr);

        MInteger ans;
        if (minDist == distRgb) {
            ans = rgbList.get(rgbIndex);
        } else if (minDist == distRbg) {
            ans = rbgList.get(rbgIndex);
        } else if (minDist == distGrb) {
            ans = grbList.get(grbIndex);
        } else if (minDist == distGbr) {
            ans = gbrList.get(gbrIndex);
        } else if (minDist == distBrg) {
            ans = brgList.get(brgIndex);
        } else {
            ans = bgrList.get(bgrIndex);
        }
        rgbList.remove(ans);
        rbgList.remove(ans);
        grbList.remove(ans);
        gbrList.remove(ans);
        brgList.remove(ans);
        bgrList.remove(ans);
        return ans.val;
    }

    public static int binarySearch(List<MInteger> list, MInteger val, Comparator<MInteger> cmp){
        int index = Collections.binarySearch(list, val, cmp);
        if (index < 0) {
            index = ~index;
            if (index >= list.size()) {
                index = list.size() - 1;
            }
        }
        return index;
    }

    public static double dist(MInteger color1, MInteger color2) {
        int c1 = color1.val;
        int r1 = (c1 & 0xFF0000) >> 16;
        int g1 = (c1 & 0x00FF00) >> 8;
        int b1 = (c1 & 0x0000FF);

        int c2 = color2.val;
        int r2 = (c2 & 0xFF0000) >> 16;
        int g2 = (c2 & 0x00FF00) >> 8;
        int b2 = (c2 & 0x0000FF);

        int dr = r1 - r2;
        int dg = g1 - g2;
        int db = b1 - b2;
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    //This method is here solely for my ease of use (I put the files under <Project Name>/Resources/ )
    public static File resource(String fileName) {
        return new File(System.getProperty("user.dir") + "/Resources/" + fileName);
    }

    static List<MInteger> rgbList;
    static List<MInteger> rbgList;
    static List<MInteger> grbList;
    static List<MInteger> gbrList;
    static List<MInteger> brgList;
    static List<MInteger> bgrList;
    static Comparator<MInteger> rgb = (color1, color2) -> color1.val - color2.val;
    static Comparator<MInteger> rbg = (color1, color2) -> {
        int c1 = color1.val;
        int c2 = color2.val;
        c1 = ((c1 & 0xFF0000)) | ((c1 & 0x00FF00) >> 8) | ((c1 & 0x0000FF) << 8);
        c2 = ((c2 & 0xFF0000)) | ((c2 & 0x00FF00) >> 8) | ((c2 & 0x0000FF) << 8);
        return c1 - c2;
    };
    static Comparator<MInteger> grb = (color1, color2) -> {
        int c1 = color1.val;
        int c2 = color2.val;
        c1 = ((c1 & 0xFF0000) >> 8) | ((c1 & 0x00FF00) << 8) | ((c1 & 0x0000FF));
        c2 = ((c2 & 0xFF0000) >> 8) | ((c2 & 0x00FF00) << 8) | ((c2 & 0x0000FF));
        return c1 - c2;
    };

    static Comparator<MInteger> gbr = (color1, color2) -> {
        int c1 = color1.val;
        int c2 = color2.val;
        c1 = ((c1 & 0xFF0000) >> 16) | ((c1 & 0x00FF00) << 8) | ((c1 & 0x0000FF) << 8);
        c2 = ((c2 & 0xFF0000) >> 16) | ((c2 & 0x00FF00) << 8) | ((c2 & 0x0000FF) << 8);
        return c1 - c2;
    };

    static Comparator<MInteger> brg = (color1, color2) -> {
        int c1 = color1.val;
        int c2 = color2.val;
        c1 = ((c1 & 0xFF0000) >> 8) | ((c1 & 0x00FF00) >> 8) | ((c1 & 0x0000FF) << 16);
        c2 = ((c2 & 0xFF0000) >> 8) | ((c2 & 0x00FF00) >> 8) | ((c2 & 0x0000FF) << 16);
        return c1 - c2;
    };

    static Comparator<MInteger> bgr = (color1, color2) -> {
        int c1 = color1.val;
        int c2 = color2.val;
        c1 = ((c1 & 0xFF0000) >> 16) | ((c1 & 0x00FF00)) | ((c1 & 0x0000FF) << 16);
        c2 = ((c2 & 0xFF0000) >> 16) | ((c2 & 0x00FF00)) | ((c2 & 0x0000FF) << 16);
        return c1 - c2;
    };

    public static void validate(BufferedImage palette, BufferedImage result) {
        List<Integer> paletteColors = getTrueColors(palette);
        List<Integer> resultColors = getTrueColors(result);
        Collections.sort(paletteColors);
        Collections.sort(resultColors);
        System.out.println(paletteColors.equals(resultColors));
    }

    public static List<Integer> getTrueColors(BufferedImage img) {
        int width = img.getWidth();
        int height = img.getHeight();
        List<Integer> colors = new ArrayList<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                colors.add(img.getRGB(x, y));
            }
        }
        Collections.sort(colors);
        return colors;
    }
}

My approach works by finding the closest color to each pixel (well, likely the closest), in 3-space, since colors are 3D.

This works by creating a list of all the points we need to fill and a list of all the possible colors we can use. We randomize the list of points (so the image will turn out better), then we go through each point and get the color of the source image.

Update: I used to do a single binary search, so red matched better than green, which matched better than blue. I changed it to do six binary searches (one for each channel permutation), then choose the closest of the candidates. It only takes ~6 times as long (i.e. 1 minute). While the pictures are still grainy, the colors match better.

Update 2: I no longer randomize the list. Instead, I choose 4 points following the rule of thirds, then randomly arrange the points, with preference to filling out the center.

Note: See the revision history for the old pictures.
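As a rough illustration of the multi-sort lookup (a hypothetical Python sketch of mine, not Justin's actual Java), the idea is to sort the palette once per channel permutation, binary-search each sorted view, and keep the closest candidate seen:

```python
import bisect
from itertools import permutations

def make_lookup(palette):
    """Build six sorted views of the palette, one per channel order."""
    views = []
    for perm in permutations(range(3)):
        # Pack the reordered channels into a single sortable integer
        key = lambda c, p=perm: (c[p[0]] << 16) | (c[p[1]] << 8) | c[p[2]]
        ordered = sorted(palette, key=key)
        views.append((key, ordered, [key(c) for c in ordered]))
    return views

def closest_color(target, views):
    """Binary-search each view and keep the best candidate found."""
    best, best_d = None, float('inf')
    for key, ordered, keys in views:
        i = bisect.bisect_left(keys, key(target))
        for j in (i - 1, i):  # candidates on both sides of the insertion point
            if 0 <= j < len(ordered):
                d = sum((a - b) ** 2 for a, b in zip(target, ordered[j]))
                if d < best_d:
                    best, best_d = ordered[j], d
    return best
```

For example, `closest_color((250, 10, 10), make_lookup(palette))` finds a near-red palette entry without scanning the whole palette; each search is O(log N), at the cost of only being approximately nearest.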

Mona Lisa -> River:

enter image description here

Mona Lisa -> American Gothic:

enter image description here

Mona Lisa -> Raytraced Spheres:

enter image description here

Starry Night -> Mona Lisa:

enter image description here


Here's an animated Gif showing how the image was constructed:

enter image description here

And showing the pixels being taken from the Mona Lisa:

enter image description here

Justin

Posted 2014-07-09T04:03:39.377

Reputation: 19 757

That's pretty damn amazing. I wouldn't have thought it possible. – AndoDaan – 2014-07-09T07:36:29.223

I doubt that it would be trivial to do, but it would be amazing to be able to produce an animated version that shows the pixels relocating themselves from the original image into the final one. – millinon – 2014-07-09T07:52:11.857

I think you misunderstood the problem. You have to rearrange the pixels in the palette to create the copy, not simply use colors from the palette. Each distinct color must be used in the copy exactly the same number of times it appeared in the palette. Your images are not passing my script. – Calvin's Hobbies – 2014-07-09T08:05:41.030

@Quincunx Now that you mention it your getColors seems fine. Above I was thinking that the colors were added to a set which is clearly not the case. I'll double check my script. – Calvin's Hobbies – 2014-07-09T08:32:43.690

@Quincunx As it turns out my script was correct (though I did simplify it for posterity) and so is your program. For reasons I'm not quite sure of the Mona Lisa image changed slightly when it was uploaded. I noticed that the pixel at (177, 377) had an rgb of (0, 0, 16) online and (0, 0, 14) on my home computer. I've replaced the jpegs with pngs to hopefully avoid issues with a lossy file type. The pixel data in the images should not have changed but it still may be wise to re-download the images. – Calvin's Hobbies – 2014-07-09T09:59:08.867

@millinon mostly because, while pixels are on their way, you'd surely get some overlapping. How do you handle those cases? Randomly pick one of the pixels that are on that square? Average the colours together? The problem with averaging, is that during the transformation you may see colours that don't exist in either image. Maybe an algorithm that does a series of adjacent pixel swaps. To find that order though would be non-trivial as well. – Cruncher – 2014-07-09T13:42:26.407

Iiik, now it seems like looking through a keyhole :D Nice! – eithed – 2014-07-09T22:27:22.877

You should really use HSV distance or something of the sort – Claudiu – 2014-07-10T22:42:04.070

@Claudiu I do what I know how to do. – Justin – 2014-07-10T23:00:46.010

@Calvin'sHobbies you saved your image as JPEG and don't know why it changed? – user253751 – 2014-07-11T07:21:17.667

@immibis No, it was already saved as a jpg but when I uploaded it a few pixels changed. It must have been due to slightly different compression algorithms between PIL and the Stack Exchange image uploader. – Calvin's Hobbies – 2014-07-11T07:34:08.043

@Calvin'sHobbies were you using Lossless JPEG? If not, your image was already not a solution before you uploaded it. – user253751 – 2014-07-11T09:48:24.940

@immibis What? Initially some of the test images were jpgs. But they are all pngs now so it doesn't matter. – Calvin's Hobbies – 2014-07-11T18:37:41.997

@Quincunx Would it be alright if I used some of your images in a YouTube video to showcase my animation script? – Calvin's Hobbies – 2014-07-11T18:54:13.300

@Calvin'sHobbies Sure! I just ask that you link to this answer. Also, I'd love to see the video. Thanks for asking. – Justin – 2014-07-11T19:00:38.857

@Calvin'sHobbies merely saving the image as JPEG is enough to distort it slightly, even without uploading it. – user253751 – 2014-07-12T11:26:24.660

8

This shouldn't be the most popular answer. The algorithm is unnecessarily complicated and the results are bad, although they look interesting. Compare the transformation from Mona Lisa to raytraced spheres with aditsu's result: http://i.stack.imgur.com/WhVcO.png

– Leif – 2014-07-12T18:59:52.530

@Leif I agree. I did come up with that exact same algorithm, but I didn't post it because others already had. – Justin – 2014-07-13T04:24:15.770

Can we see some results with the color spheres palette? I do like that this focuses on the center of the image and the effect it creates. – John Dvorak – 2014-07-13T05:13:48.863

Can you provide a precompiled version of your code (with source files taken from arguments, console or GUI)? I think I have a compiler, but not everyone does / knows how to. A JAR will do. – John Dvorak – 2014-07-14T05:30:47.413

97

Perl, with Lab color space and dithering

Note: Now I have a C solution too.

Uses a similar approach to aditsu's, (choose two random positions, and swap the pixels at those positions if it would make the image more like the target image), with two major improvements:

  1. Uses the CIE L*a*b* color space to compare colors — the Euclidean metric on this space is a very good approximation to the perceptual difference between two colors, so the color mappings should be more accurate than with RGB or even HSV/HSL.
  2. After an initial pass putting pixels in the best possible single position, it does an additional pass with a random dither. Instead of comparing the pixel values at the two swap positions, it computes the average pixel value of a 3x3 neighborhood centered at the swap positions. If a swap improves the average colors of the neighborhoods it's allowed, even if it makes individual pixels less accurate. For some image pairs this has a dubious effect on the quality (and makes the palette effect less striking), but for some (like spheres -> anything) it helps quite a bit. The "detail" factor emphasizes the central pixel to a variable degree. Increasing it decreases the overall amount of dither, but retains more fine detail from the target image. The dithered optimization is slower, which is why we run it on the output of the non-dithered optimization as a starting point.

Averaging Lab values, like the dither does, isn't really justified (they should be converted to XYZ, averaged, and converted back) but it works just fine for these purposes.
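Stripped of the Lab conversion and the dither pass, the first-phase accept/reject rule is simple; here is a hedged Python sketch (names mine, not taken from the Perl below), where `dist` stands in for the squared Lab distance:

```python
import random

def improve(img, target, dist, iterations=100_000):
    """One optimization pass: pick two random positions and swap their
    pixels whenever the swap reduces total distance to the target.
    `img` and `target` are equal-length lists of color tuples; `dist`
    is a metric such as squared distance in Lab space. Returns the
    number of swaps made."""
    made = 0
    n = len(img)
    for _ in range(iterations):
        a, b = random.randrange(n), random.randrange(n)
        before = dist(img[a], target[a]) + dist(img[b], target[b])
        after = dist(img[b], target[a]) + dist(img[a], target[b])
        if after < before:  # greedy: accept only strict improvements
            img[a], img[b] = img[b], img[a]
            made += 1
    return made
```

The Perl program runs exactly this loop in its first phase, then re-runs it with neighborhood-averaged distances for the dither phase, stopping each phase when the acceptance rate drops below its limit.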

These images have termination limits of 100 and 200 (end the first phase when fewer than 100 of each 50,000 attempted swaps are accepted, and the second phase at fewer than 200), and a dithering detail factor of 12 (a little tighter dither than the previous set). At this super high quality setting, the images take a long time to generate, but with parallelization the whole job still finishes within an hour on my 6-core box. Bumping the values up to 500 or so finishes images within a few minutes, they just look a little less polished. I wanted to show off the algorithm at its best here.

Code is not by any means pretty:

#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;
use Graphics::ColorObject 'RGB_to_Lab';
use List::Util qw(sum max);

my $source = Image::Magick->new;
$source->Read($ARGV[0]);
my $target = Image::Magick->new;
$target->Read($ARGV[1]);
my ($limit1, $limit2, $detail) = @ARGV[2,3,4];

my ($width, $height) = ($target->Get('width'), $target->Get('height'));

# Transfer the pixels of the $source onto a new canvas with the dimensions of $target
$source->Set(magick => 'RGB');
my $img = Image::Magick->new(size => "${width}x${height}", magick => 'RGB', depth => 8);
$img->BlobToImage($source->ImageToBlob);

my ($made, $rejected) = (0,0);

system("rm anim/*.png");

my (@img_lab, @target_lab);
for my $x (0 .. $width - 1) {
  for my $y (0 .. $height - 1) {
    $img_lab[$x][$y] = RGB_to_Lab([$img->getPixel(x => $x, y => $y)], 'sRGB');
    $target_lab[$x][$y] = RGB_to_Lab([$target->getPixel(x => $x, y => $y)], 'sRGB');
  }
}

my $n = 0;
my $frame = 0;
my $mode = 1;

while (1) {
  $n++;

  my $swap = 0;
  my ($x1, $x2, $y1, $y2) = (int rand $width, int rand $width, int rand $height, int rand $height);
  my ($dist, $dist_swapped);

  if ($mode == 1) {
    $dist = (sum map { ($img_lab[$x1][$y1][$_] - $target_lab[$x1][$y1][$_])**2 } 0..2)
          + (sum map { ($img_lab[$x2][$y2][$_] - $target_lab[$x2][$y2][$_])**2 } 0..2);

    $dist_swapped = (sum map { ($img_lab[$x2][$y2][$_] - $target_lab[$x1][$y1][$_])**2 } 0..2)
                  + (sum map { ($img_lab[$x1][$y1][$_] - $target_lab[$x2][$y2][$_])**2 } 0..2);

  } else { # dither mode
    my $xoffmin = ($x1 == 0 || $x2 == 0 ? 0 : -1);
    my $xoffmax = ($x1 == $width - 1 || $x2 == $width - 1 ? 0 : 1);
    my $yoffmin = ($y1 == 0 || $y2 == 0 ? 0 : -1);
    my $yoffmax = ($y1 == $height - 1 || $y2 == $height - 1 ? 0 : 1);

    my (@img1, @img2, @target1, @target2, $points);
    for my $xoff ($xoffmin .. $xoffmax) {
      for my $yoff ($yoffmin .. $yoffmax) {
        $points++;
        for my $chan (0 .. 2) {
          $img1[$chan] += $img_lab[$x1+$xoff][$y1+$yoff][$chan];
          $img2[$chan] += $img_lab[$x2+$xoff][$y2+$yoff][$chan];
          $target1[$chan] += $target_lab[$x1+$xoff][$y1+$yoff][$chan];
          $target2[$chan] += $target_lab[$x2+$xoff][$y2+$yoff][$chan];
        }
      }
    }

    my @img1s = @img1;
    my @img2s = @img2;
    for my $chan (0 .. 2) {
      $img1[$chan] += $img_lab[$x1][$y1][$chan] * ($detail - 1);
      $img2[$chan] += $img_lab[$x2][$y2][$chan] * ($detail - 1);

      $target1[$chan] += $target_lab[$x1][$y1][$chan] * ($detail - 1);
      $target2[$chan] += $target_lab[$x2][$y2][$chan] * ($detail - 1);

      $img1s[$chan] += $img_lab[$x2][$y2][$chan] * $detail - $img_lab[$x1][$y1][$chan];
      $img2s[$chan] += $img_lab[$x1][$y1][$chan] * $detail - $img_lab[$x2][$y2][$chan];
    }

    $dist = (sum map { ($img1[$_] - $target1[$_])**2 } 0..2)
          + (sum map { ($img2[$_] - $target2[$_])**2 } 0..2);

    $dist_swapped = (sum map { ($img1s[$_] - $target1[$_])**2 } 0..2)
                  + (sum map { ($img2s[$_] - $target2[$_])**2 } 0..2);

  }

  if ($dist_swapped < $dist) {
    my @pix1 = $img->GetPixel(x => $x1, y => $y1);
    my @pix2 = $img->GetPixel(x => $x2, y => $y2);
    $img->SetPixel(x => $x1, y => $y1, color => \@pix2);
    $img->SetPixel(x => $x2, y => $y2, color => \@pix1);
    ($img_lab[$x1][$y1], $img_lab[$x2][$y2]) = ($img_lab[$x2][$y2], $img_lab[$x1][$y1]);
    $made ++;
  } else {
    $rejected ++;
  }

  if ($n % 50000 == 0) {
#    print "Made: $made Rejected: $rejected\n";
    $img->Write('png:out.png');
    system("cp", "out.png", sprintf("anim/frame%05d.png", $frame++));
    if ($mode == 1 and $made < $limit1) {
      $mode = 2;
      system("cp", "out.png", "nodither.png");
    } elsif ($mode == 2 and $made < $limit2) {
      last;
    }
    ($made, $rejected) = (0, 0);
  }
}

Results

American Gothic palette

Little difference here with dithering or not.

Mona Lisa palette

Dithering reduces the banding on the spheres, but isn't especially pretty.

Starry Night palette

Mona Lisa retains a bit more detail with dithering. Spheres is about the same situation as last time.

Scream palette

Starry Night without dithering is the most awesome thing ever. Dithering makes it more photo-accurate, but far less interesting.

Spheres palette

As aditsu says, the true test. I think I pass.

Dithering helps immensely with American Gothic and Mona Lisa, mixing some grays and other colors in with the more intense pixels to produce semi-accurate skin tones instead of horrible blotches. The Scream is affected far less.

Camaro - Mustang

Source images from flawr's post.

Camaro:

Mustang:

Camaro palette

Looks pretty good without dither.

A "tight" dither (same detail factor as above) doesn't change much, just adds a little detail in the highlights on the hood and roof.

A "loose" dither (detail factor dropped to 6) really smooths out the tonality, and a lot more detail is visible through the windshield, but dithering patterns are more obvious everywhere.

Mustang palette

Parts of the car look great, but the gray pixels look glitchy. What's worse, all the darker yellow pixels got distributed over the red Camaro body, and the non-dithering algorithm can't find anything to do with the lighter ones (moving them into the car would make the match worse, and moving them to another spot on the background makes no net difference), so there's a ghost-Mustang in the background.

Dithering is able to spread those extra yellow pixels around so they don't touch, scattering them more-or-less evenly over the background in the process. The highlights and shadows on the car look a little better.

Again, the loose dither has the most even tonality and reveals more detail on the headlights and windshield. The car almost looks red again. The background is clumpier for some reason. Not sure if I like it.

Video

(HQ Link)

hobbs

Posted 2014-07-09T04:03:39.377

Reputation: 2 403

I like how in some cases the dithered image is better looking (Mona Lisa with Starry Night) and in some worse looking (Starry Night with Scream). I guess it all depends upon the amount of colour surfaces within the image - if there are clear swatches of colour the end result will look bad with dithering. I also find it interesting that with the Mustang / Camaro example you can still see the outline of the Mustang. Overall - very nice! – eithed – 2014-07-11T11:24:29.930

3

I really like this one, the heavily dithered images have a wonderfully pointillist feel. Seurat does Mona Lisa anyone?

– Boris the Spider – 2014-07-11T12:55:40.220

Your algorithm definitely does a great job with the horrible Spheres pallette, good work! – Snowbody – 2014-07-11T13:47:10.377

@hobbs Fantastic use of the rainbow palette, and your cars are nearly perfect! Would it be alright if I used some of your images in a YouTube video to showcase my animation script? – Calvin's Hobbies – 2014-07-11T19:18:30.207

@Calvin'sHobbies feel free :) – hobbs – 2014-07-11T19:46:28.990

@hobbs I was surprised to see 1. anyone also trying the thing with the cars 2. the cars so perfectly reconstructed, congrats!!! – flawr – 2014-07-12T21:43:00.477

That's the best answer so far. But did you tried different color spaces? I used a variation of your algorithm with Y-Cr-Cb and my pictures looks better (to me) – edc65 – 2014-07-13T10:27:11.327

I think the only reason your dithering gives that pattern is because you are using a 3x3 block of pixels with the weight only changed for the centre. If you weighted the pixels according to distance from the centre (so the corner pixels contribute less than the 4 adjacent ones) and possibly extended to slightly more pixels then the dithering should be less noticeable. It's already such a great improvement for the rainbow palette so might be worth seeing what more it can do... – trichoplax – 2014-07-14T23:11:51.533

1

@githubphagocyte I spent half a day trying stuff like that, but none of it worked out how I wanted. One variant produced a very nice random-looking dither, but also gave me an optimization phase that never terminated. Other variants either had worse artifacting or too-strong dithering. My C solution has better dithering thanks to ImageMagick's spline interpolation, though. It's a cubic spline so I think it's using a 5x5 neighborhood.

– hobbs – 2014-07-14T23:28:06.533

I saw your C solution after writing the comment (reading through in order...). Yes the dithering is significantly better there - again the smoothing out of the rainbow spheres palette is impressive. – trichoplax – 2014-07-15T00:14:23.993

With the spheres palette, it’s like a Wayne Thibaud painting. – Bradd Szonye – 2014-07-19T01:29:44.650

79

Python

The idea is simple: every pixel corresponds to a point in 3D RGB space. The goal is matching each pixel of the source with one of the destination image; preferably they should be 'close' (represent the 'same' colour). Since the colors can be distributed in pretty different ways, we cannot just match each pixel to its nearest neighbour.

Strategy

Let n be an integer (small, 3-255 or so). The pixel cloud in RGB space gets sorted by the first axis (R). This set of pixels is then partitioned into n partitions. Each partition is sorted along the second axis (G) and partitioned the same way again, and the same is done once more along the third axis (B). We do this with both pictures and now have an array of points for each. Now we can simply match the pixels by their position in the arrays, since a pixel at the same position in each array has a similar position relative to its pixel cloud in RGB space.

If the distributions of the pixels in RGB space of the two images are similar (i.e. only shifted and/or stretched along the three axes), the result will be pretty predictable. If the distributions look completely different, this algorithm will not produce as good results (as seen in the last example), but this is also one of the harder cases, I think. What it does not do is exploit the perceptual effects of interaction between neighbouring pixels.

Code

Disclaimer: I am an absolute newbie to python.

from PIL import Image

n = 5 #number of partitions per channel.

src_index = 3 #index of source image
dst_index = 2 #index of destination image

images =  ["img0.bmp","img1.bmp","img2.bmp","img3.bmp"];
src_handle = Image.open(images[src_index])
dst_handle = Image.open(images[dst_index])
src = src_handle.load()
dst = dst_handle.load()
assert src_handle.size[0]*src_handle.size[1] == dst_handle.size[0]*dst_handle.size[1], "images must have the same area"

def makePixelList(img):
    l = []
    for x in range(img.size[0]):
        for y in range(img.size[1]):
            l.append((x,y))
    return l

lsrc = makePixelList(src_handle)
ldst = makePixelList(dst_handle)

def sortAndDivide(coordlist,pixelimage,channel): #core
    global src,dst,n
    retlist = []
    #sort
    coordlist.sort(key=lambda t: pixelimage[t][channel])
    #divide
    partitionLength = int(len(coordlist)/n)
    if partitionLength <= 0:
        partitionLength = 1
    if channel < 2:
        for i in range(0,len(coordlist),partitionLength):
            retlist += sortAndDivide(coordlist[i:i+partitionLength],pixelimage,channel+1)
    else:
        retlist += coordlist
    return retlist

print(src[lsrc[0]])

lsrc = sortAndDivide(lsrc,src,0)
ldst = sortAndDivide(ldst,dst,0)

for i in range(len(ldst)):
    dst[ldst[i]] = src[lsrc[i]]

dst_handle.save("exchange"+str(src_index)+str(dst_index)+".png")

Result

I think it turned out not badly, considering how simple the solution is. You can of course get better results by fiddling with the parameter, first transforming the colors into another color space, or even optimizing the partitioning.

comparison of my results

Full Gallery here: https://imgur.com/a/hzaAm#6

Detailed for River

monalisa > river

monalisa>river

people > river

people>river

balls > river

balls>river

starry night > river

nocturne>river

the cry > river

thecry>river

balls > MonaLisa, varying n = 2,4,6,...,20

This was the most challenging task, I think - far from nice pictures. Here is a gif (which had to be reduced to 256 colours) of the different parameter values n = 2,4,6,...,20. To me it was surprising that very low values produced better images (looking at the face of Mme. Lisa): balls >monalisa

Sorry I cannot stop

Which one do you like better? Chevy Camaro or Ford Mustang? Perhaps this technique could be improved and used for colouring b/w pictures. Now here: first I cut the cars roughly out of the background by painting it white (in Paint, not very professional...) and then applied the python program in each direction.

Originals

original original

Recoloured

There are some artifacts, I think because the area of one car was slightly bigger than the other and because my artistic skills are pretty bad=) manipulated enter image description here

flawr

Posted 2014-07-09T04:03:39.377

Reputation: 40 560

Wow, I really love the Starry Night river, and how The Scream makes it look like a river of fire. – Calvin's Hobbies – 2014-07-09T20:46:27.030

@Calvin'sHobbies wow yeah! They almost look drawn, I didn't even look at them closely since I was busy uploading the new images=P But thank you for this great challenge! – flawr – 2014-07-09T20:49:36.560

I love the car transformations. This might once become some kind of image editing transformation, really! – tomsmeding – 2014-07-10T05:47:26.477

@tomsmeding Thank you, I already thought about using the technique for the colorization of b/w images, but up to now with limited success. But perhaps we need some more ideas for getting this done=) – flawr – 2014-07-10T18:38:16.450

@flawr Would it be alright if I used some of your images in a YouTube video to showcase my animation script? – Calvin's Hobbies – 2014-07-11T18:54:45.603

@Calvin'sHobbies Of course! I'd be interested to see the video! – flawr – 2014-07-12T21:40:12.497

48

Python - A theoretically optimal solution

I say theoretically optimal because the truly optimal solution is not quite feasible to compute. I start off by describing the theoretical solution, and then explain how I tweaked it to make it computationally feasible in both space and time.

I consider the optimal solution to be the one yielding the lowest total error across all pixels between the old and new images. The error between two pixels is defined as the Euclidean distance between the points in 3D space where each color value (R, G, B) is a coordinate. In practice, because of how humans see things, the optimal solution may very well not be the best looking solution. However, it appears to do fairly well in all cases.

To compute the mapping, I considered this as a minimum weight bipartite matching problem. In other words, there are two sets of nodes: the original pixels and the palette pixels. An edge is created between each pixel across the two sets (but no edges are created within a set). The cost, or weight, of an edge is the Euclidean distance between the two pixels, as described above. The closer two colors are visually, the lower the cost between the pixels.

Bipartite Matching Example

This creates a cost matrix of size N². For these images, where N = 123520, approximately 40 GB of memory is required to represent the costs as integers, and half that as short integers. Either way, I did not have enough memory on my machine to make an attempt. Another issue is that the Hungarian algorithm, or the Jonker-Volgenant algorithm, which can be used to solve this problem, runs in O(N³) time. While definitely computable, generating a solution per image would have likely taken hours or days.

To get around this issue, I randomly sort both lists of pixels, split the lists into C chunks, run a C++ implementation of the Jonker-Volgenant algorithm on each sublist pair, and then join the lists back to create the final mapping. Therefore, the code below would allow one to find the truly optimal solution provided that they set the chunk size C to 1 (no chunking) and have enough memory. For these images, I set C to 16, so that N becomes 7720, taking just a few minutes per image.

A simple way to think of why this works is that randomly sorting the list of pixels and then taking a subset is like sampling the image. So by setting C = 16, it's like taking 16 distinct random samples of size N/C from both the original and palette. Granted, there are probably better ways of splitting the lists, but a random approach provides decent results.
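For intuition, here is a toy brute-force version of the same minimum-cost assignment (my sketch, not part of the answer's code): it tries every permutation, which is O(N!) and only feasible for tiny N — exactly why the answer chunks the problem and hands each chunk to a Jonker-Volgenant solver instead.

```python
from itertools import permutations

def brute_force_match(palette, source):
    """Exact minimum-cost assignment by trying every permutation.
    O(N!) time -- only viable for tiny N; real solvers (Hungarian,
    Jonker-Volgenant) achieve O(N^3) for the same optimum."""
    def cost(p, s):
        # Squared Euclidean distance between two RGB pixels
        return sum((a - b) ** 2 for a, b in zip(p, s))
    best_perm, best_cost = None, float('inf')
    for perm in permutations(range(len(source))):
        total = sum(cost(palette[perm[i]], source[i])
                    for i in range(len(source)))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm  # best_perm[i] = palette index assigned to source pixel i

palette = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
source = [(250, 250, 250), (10, 10, 10), (100, 100, 100)]
# brute_force_match(palette, source) → (2, 0, 1)
```

On this tiny example the near-white source pixel gets the white palette entry, the near-black one gets black, and the mid-gray one gets gray — the same mapping any optimal solver would produce.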

import multiprocessing as mp
import sys
from random import shuffle
from PIL import Image
import numpy as np
import LAPJV

def getError(p1, p2):
    return (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2 + (p1[2]-p2[2])**2

def getCostMatrix(pallete_list, source_list):
    num_pixels = len(pallete_list)
    matrix = np.zeros((num_pixels, num_pixels))

    for i in range(num_pixels):
        for j in range(num_pixels):
            matrix[i][j] = getError(pallete_list[i], source_list[j])

    return matrix

def chunks(l, n):
    if n < 1:
        n = 1
    return [l[i:i + n] for i in range(0, len(l), n)]

def imageToColorList(img_file):
    i = Image.open(img_file)

    pixels = i.load()
    width, height = i.size

    all_pixels = []
    for x in range(width):
        for y in range(height):
            pixel = pixels[x, y]
            all_pixels.append(pixel)

    return all_pixels

def colorListToImage(color_list, old_img_file, new_img_file, mapping):
    i = Image.open(old_img_file)

    pixels = i.load()
    width, height = i.size
    idx = 0

    for x in range(width):
        for y in range(height):
            pixels[x, y] = color_list[mapping[idx]]
            idx += 1

    i.save(new_img_file)

def getMapping(pallete_list, source_list):
    matrix = getCostMatrix(source_list, pallete_list)
    result = LAPJV.lap(matrix)[1]
    ret = []
    for i in range(len(pallete_list)):
        ret.append(result[i])
    return ret

def randomizeList(l):
    rdm_l = list(l)
    shuffle(rdm_l)
    return rdm_l

def getPartialMapping(zipped_chunk):
    pallete_chunk = zipped_chunk[0]
    source_chunk = zipped_chunk[1]
    subl_pallete = map(lambda v: v[1], pallete_chunk)
    subl_source = map(lambda v: v[1], source_chunk)
    mapping = getMapping(subl_pallete, subl_source)
    return mapping

def getMappingWithPartitions(pallete_list, source_list, C = 1):
    rdm_pallete = randomizeList(enumerate(pallete_list))
    rdm_source = randomizeList(enumerate(source_list))
    num_pixels = len(rdm_pallete)
    real_mapping = [0] * num_pixels

    chunk_size = int(num_pixels / C)

    chunked_rdm_pallete = chunks(rdm_pallete, chunk_size)
    chunked_rdm_source = chunks(rdm_source, chunk_size)
    zipped_chunks = zip(chunked_rdm_pallete, chunked_rdm_source)

    pool = mp.Pool(2)
    mappings = pool.map(getPartialMapping, zipped_chunks)

    for mapping, zipped_chunk in zip(mappings, zipped_chunks):
        pallete_chunk = zipped_chunk[0]
        source_chunk = zipped_chunk[1]
        for idx1,idx2 in enumerate(mapping):
            src_px = source_chunk[idx1]
            pal_px = pallete_chunk[idx2]
            real_mapping[src_px[0]] = pal_px[0]

    return real_mapping

def run(pallete_name, source_name, output_name):
    print("Getting Colors...")
    pallete_list = imageToColorList(pallete_name)
    source_list = imageToColorList(source_name)

    print("Creating Mapping...")
    mapping = getMappingWithPartitions(pallete_list, source_list, C = 16)

    print("Generating Image...");
    colorListToImage(pallete_list, source_name, output_name, mapping)

if __name__ == '__main__':
    pallete_name = sys.argv[1]
    source_name = sys.argv[2]
    output_name = sys.argv[3]
    run(pallete_name, source_name, output_name)

Results:

Like with aditsu's solution, these images were all generated using the exact same parameters. The only parameter here is C, which should be set as low as possible. For me, C = 16 was a good balance between speed and quality.

All images: http://imgur.com/a/RCZiX#0

American Gothic palette

mona-gothic scream-gothic

Mona Lisa palette

gothic-mona scream-mona

Starry Night palette

mona-night river-night

Scream palette

gothic-scream mona-scream

River palette

gothic-spheres mona-spheres

Spheres palette

gothic-spheres mona-spheres

WhatAWorld

Posted 2014-07-09T04:03:39.377

Reputation: 721

I really like (Scream -> Starry night) and (Spheres -> Starry night). (Spheres -> Mona Lisa) isn't too bad either, but I'd like to see more dithering. – John Dvorak – 2014-07-13T04:22:10.957

Lol, I was thinking the same about the bipartite graph matching, but abandonded the idea because the N^3.. – RobAu – 2014-07-14T06:46:31.390

This "almost-deterministic" algorithm beats all of the deterministic ones IMO, and stands up with the good randomized ones. I like it. – hobbs – 2014-07-15T00:26:56.517

I don't agree with your notion of an optimal solution. Why? Dithering can improve the perceptual quality (for humans) yet yield a lower score using your definition. Also using RGB over something like CIELUV is a mistake. – Thomas Eding – 2014-09-29T22:49:18.197

39

Python

Edit: Just realized that you can actually sharpen the source with ImageFilter to make the results better defined.

Rainbow -> Mona Lisa (sharpened Mona Lisa source, Luminance only)

enter image description here

Rainbow -> Mona Lisa (non-sharpened source, weighted with Y = 10, I = 10, Q = 0)

enter image description here

Mona Lisa -> American Gothic (non-sharpened source, Luminance only)

enter image description here

Mona Lisa -> American Gothic (non-sharpened source, weighted with Y = 1, I = 10, Q = 1)

enter image description here

River -> Rainbow (non-sharpened source, Luminance only)

enter image description here

Basically, it gets all the pixels from the two pictures into two lists.

Sort them with the luminance as the key. Y in YIQ represents the luminance.

Then, for each pixel in the source (which is in ascending luminance order), get the RGB value from the pixel at the same index in the palette list.

from PIL import Image, ImageFilter
import colorsys

def getPixels(image):
    width, height = image.size
    pixels = []
    for x in range(width):
        for y in range(height):
            pixels.append([(x,y), image.getpixel((x,y))])
    return pixels

def yiq(pixel):
    # y is the luminance
    y,i,q = colorsys.rgb_to_yiq(pixel[1][0], pixel[1][1], pixel[1][2])
    # Change the weights accordingly to get different results
    return 10*y + 0*i + 0*q

# Open the images
source  = Image.open('ml.jpg')
pallete = Image.open('rainbow.png')

# Sharpen the source... It won't affect the palette anyway =D
source = source.filter(ImageFilter.SHARPEN)

# Sort the two lists by luminance
sourcePixels  = sorted(getPixels(source),  key=yiq)
palletePixels = sorted(getPixels(pallete), key=yiq)

copy = Image.new('RGB', source.size)

# Iterate through all the coordinates of source
# And set the new color
index = 0
for sourcePixel in sourcePixels:
    copy.putpixel(sourcePixel[0], palletePixels[index][1])
    index += 1

# Save the result
copy.save('copy.png')

To keep up with the trend of animations...

Pixels in the scream being quicksorted into starry night and vice-versa

enter image description here enter image description here

Vectorized

Posted 2014-07-09T04:03:39.377

Reputation: 3 486

That simple idea works really well. I wonder if it can be extended and use weighted luminance, saturation and hue. (E.g. 10 *L + S + H ) to get better same area colour matching. – Moogie – 2014-07-09T11:55:49.713

@bitpwnr Your images are not passing my script but that's almost certainly because you're using the slightly different jpegs I had up initially, so no big deal. However I was only able to run your code after replacing [6], [7], and [8] with [1], [2], and [1]. I'm getting the same images but that's a very unique typo :P – Calvin's Hobbies – 2014-07-09T17:45:52.887

Your images are very clear but kinda desaturated :p – aditsu quit because SE is EVIL – 2014-07-09T21:16:42.340

@Calvin'sHobbies Opps, corrected the typos. – Vectorized – 2014-07-10T01:51:17.370

@bitpwner Would it be alright if I used some of your images in a YouTube video to showcase my animation script? – Calvin's Hobbies – 2014-07-11T18:55:44.807

Sure. But I am pretty sure aditsu's pics are more museumesque than mine though. =) – Vectorized – 2014-07-12T07:13:51.350

@moogie I'd think that YIQ would already be pretty optimal; HSV isn't really that meaningful of a colorspace in the grand scheme of things, and YIQ does a great job of representing the perceptual distance between colors (not as perfectly as Lab but it's close enough). – fluffy – 2014-07-13T02:18:52.697

39

C# Winform - Visual Studio 2010

Edit: Dithering added.

That's my version of the random-swap algorithm, @hobbs flavour. I still feel that some sort of non-random dithering could do better...

Color processing is done in Y-Cb-Cr space (as in JPEG compression)

Two-phase processing:

  1. Copy of pixels from the source in luminance order. This already gives a good image, but a desaturated, almost grey-scale one, in near-zero time
  2. Repeated random swaps of pixels. A swap is kept if it gives a smaller delta (with respect to the source) in the 3x3 cell containing the pixel, so it acts as a dithering effect. The delta is calculated in Y-Cb-Cr space with no weighting of the different components.
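
The two phases above can be sketched in Python. This is a minimal, unoptimised sketch assuming two PIL images of equal area; the function names, the boundary handling, and the swap count are my own choices, not taken from the C# code below (which runs 200 rounds of 200,000 swaps):

```python
import random
from PIL import Image

def ycbcr(px):
    r, g, b = px
    return (0.299 * r + 0.587 * g + 0.114 * b,            # Y
            128 - 0.168736 * r - 0.331264 * g + 0.5 * b,  # Cb
            128 + 0.5 * r - 0.418688 * g - 0.081312 * b)  # Cr

def phase1(source, palette):
    """Phase 1: assign palette pixels to source positions in luminance order."""
    w, h = source.size
    order = sorted(range(w * h),
                   key=lambda i: ycbcr(source.getpixel((i % w, i // w)))[0])
    colours = sorted(palette.getdata(), key=lambda p: ycbcr(p)[0])
    out = Image.new('RGB', source.size)
    for i, c in zip(order, colours):
        out.putpixel((i % w, i // w), c)
    return out

def cell_delta(img, ref, x, y):
    """Unweighted squared Y-Cb-Cr error over the 3x3 cell around (x, y)."""
    w, h = img.size
    total = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            px, py = x + dx, y + dy
            if 0 <= px < w and 0 <= py < h:
                a = ycbcr(img.getpixel((px, py)))
                b = ycbcr(ref.getpixel((px, py)))
                total += sum((u - v) ** 2 for u, v in zip(a, b))
    return total

def phase2(out, source, swaps=100000):
    """Phase 2: random swaps, kept only when the local 3x3 delta improves."""
    w, h = out.size
    for _ in range(swaps):
        x1, y1 = random.randrange(w), random.randrange(h)
        x2, y2 = random.randrange(w), random.randrange(h)
        before = cell_delta(out, source, x1, y1) + cell_delta(out, source, x2, y2)
        p1, p2 = out.getpixel((x1, y1)), out.getpixel((x2, y2))
        out.putpixel((x1, y1), p2)
        out.putpixel((x2, y2), p1)
        if cell_delta(out, source, x1, y1) + cell_delta(out, source, x2, y2) >= before:
            out.putpixel((x1, y1), p1)  # no improvement: revert the swap
            out.putpixel((x2, y2), p2)
    return out
```

Both phases only permute the palette pixels, so the result always passes the checker script from the question.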

This is essentially the same method used by @hobbs, minus his initial pass of random swaps without dithering. My times are just shorter (does the language count?) and I think my images are better (probably because the color space used is more accurate).

Program usage: put the .png images in your c:\temp folder; check an item in the list to choose the palette image, and select an item to choose the source image (not very user friendly). Click the start button to begin processing; saving is automatic (even when you'd rather it wasn't - beware).

Processing time: under 90 seconds.

Updated Results

Palette: American Gothic

Monna Lisa Rainbow River Scream Starry Night

Palette: Monna Lisa

American Gothic Rainbow River Scream Starry Night

Palette: Rainbow

American Gothic Monna Lisa River Scream Starry Night

Palette: River

American Gothic Monna Lisa Rainbow Scream Starry Night

Palette: Scream

American Gothic Monna Lisa Rainbow River Starry Night

Palette: Starry Night

American Gothic Monna Lisa Rainbow River Scream

Form1.cs

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Imaging;
using System.IO;

namespace Palette
{
    public struct YRB
    {
        public int y, cb, cr;

        public YRB(int r, int g, int b)
        {
            y = (int)(0.299 * r + 0.587 * g + 0.114 * b);
            cb = (int)(128 - 0.168736 * r - 0.331264 * g + 0.5 * b);
            cr = (int)(128 + 0.5 * r - 0.418688 * g - 0.081312 * b);
        }
    }

    public struct Pixel
    {
        private const int ARGBAlphaShift = 24;
        private const int ARGBRedShift = 16;
        private const int ARGBGreenShift = 8;
        private const int ARGBBlueShift = 0;

        public int px, py;
        private uint _color;
        public YRB yrb;

        public Pixel(uint col, int px = 0, int py = 0)
        {
            this.px = px;
            this.py = py;
            this._color = col;
            yrb = new YRB((int)(col >> ARGBRedShift) & 255, (int)(col >> ARGBGreenShift) & 255, (int)(col >> ARGBBlueShift) & 255); 
        }

        public uint color
        {
            get { 
                return _color; 
            }
            set {
                _color = value;
                yrb = new YRB((int)(value >> ARGBRedShift) & 255, (int)(value >> ARGBGreenShift) & 255, (int)(value >> ARGBBlueShift) & 255);
            }
        }

        public int y
        {
            get { return yrb.y; }
        }
        public int cr
        {
            get { return yrb.cr; }
        }
        public int cb
        {
            get { return yrb.cb; }
        }
    }

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            DirectoryInfo di = new System.IO.DirectoryInfo(@"c:\temp\");
            foreach (FileInfo file in di.GetFiles("*.png"))
            {
                ListViewItem item = new ListViewItem(file.Name);
                item.SubItems.Add(file.FullName);
                lvFiles.Items.Add(item);
            }
        }

        private void lvFiles_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
        {
            if (e.IsSelected)
            {
                string file = e.Item.SubItems[1].Text;
                GetImagePB(pbSource, file);
                pbSource.Tag = file; 
                DupImage(pbSource, pbOutput);

                this.Width = pbOutput.Width + pbOutput.Left + 20;
                this.Height = Math.Max(pbOutput.Height, pbPalette.Height)+lvFiles.Height*2;   
            }
        }

        private void lvFiles_ItemCheck(object sender, ItemCheckEventArgs e)
        {
            foreach (ListViewItem item in lvFiles.CheckedItems)
            {
                if (item.Index != e.Index) item.Checked = false;
            }
            string file = lvFiles.Items[e.Index].SubItems[1].Text;
            GetImagePB(pbPalette, file);
            pbPalette.Tag = lvFiles.Items[e.Index].SubItems[0].Text; 

            this.Width = pbOutput.Width + pbOutput.Left + 20;
            this.Height = Math.Max(pbOutput.Height, pbPalette.Height) + lvFiles.Height * 2;   
        }

        Pixel[] Palette;
        Pixel[] Source;

        private void BtnStart_Click(object sender, EventArgs e)
        {
            lvFiles.Enabled = false;
            btnStart.Visible = false;
            progressBar.Visible = true; 
            DupImage(pbSource, pbOutput);

            Work(pbSource.Image as Bitmap, pbPalette.Image as Bitmap, pbOutput.Image as Bitmap);

            string newfile = (string)pbSource.Tag +"-"+ (string)pbPalette.Tag;
            pbOutput.Image.Save(newfile, ImageFormat.Png);   

            lvFiles.Enabled = true;
            btnStart.Visible = true;
            progressBar.Visible = false;
        }

        private void Work(Bitmap srcb, Bitmap palb, Bitmap outb)
        {
            GetData(srcb, out Source);
            GetData(palb, out Palette);

            FastBitmap fout = new FastBitmap(outb);
            FastBitmap fsrc = new FastBitmap(srcb);
            int pm = Source.Length;
            int w = outb.Width;
            int h = outb.Height;
            progressBar.Maximum = pm;

            fout.LockImage();
            for (int p = 0; p < pm; p++)
            {
                fout.SetPixel(Source[p].px, Source[p].py, Palette[p].color);
            }
            fout.UnlockImage();

            pbOutput.Refresh();

            var rnd = new Random();
            int totsw = 0;
            progressBar.Maximum = 200;
            for (int i = 0; i < 200; i++)
            {
                int nsw = 0;
                progressBar.Value = i;
                fout.LockImage();
                fsrc.LockImage();
                for (int j = 0; j < 200000; j++)
                {
                    nsw += CheckSwap(fsrc, fout, 1 + rnd.Next(w - 2), 1 + rnd.Next(h - 2), 1 + rnd.Next(w - 2), 1 + rnd.Next(h - 2));
                }
                totsw += nsw;
                lnCurSwap.Text = nsw.ToString();
                lnTotSwap.Text = totsw.ToString();
                fout.UnlockImage();
                fsrc.UnlockImage();
                pbOutput.Refresh();
                Application.DoEvents();
                if (nsw == 0)
                {
                    break;
                }
            }            
        }

        int CheckSwap(FastBitmap fsrc, FastBitmap fout, int x1, int y1, int x2, int y2)
        {
            const int fmax = 3;
            YRB ov1 = new YRB();
            YRB sv1 = new YRB();
            YRB ov2 = new YRB();
            YRB sv2 = new YRB();

            int f;
            for (int dx = -1; dx <= 1; dx++)
            {
                for (int dy = -1; dy <= 1; dy++)
                {
                    f = (fmax - Math.Abs(dx) - Math.Abs(dy));
                    {
                        Pixel o1 = new Pixel(fout.GetPixel(x1 + dx, y1 + dy));
                        ov1.y += o1.y * f;
                        ov1.cb += o1.cb * f;
                        ov1.cr += o1.cr * f;

                        Pixel s1 = new Pixel(fsrc.GetPixel(x1 + dx, y1 + dy));
                        sv1.y += s1.y * f;
                        sv1.cb += s1.cb * f;
                        sv1.cr += s1.cr * f;

                        Pixel o2 = new Pixel(fout.GetPixel(x2 + dx, y2 + dy));
                        ov2.y += o2.y * f;
                        ov2.cb += o2.cb * f;
                        ov2.cr += o2.cr * f;

                        Pixel s2 = new Pixel(fsrc.GetPixel(x2 + dx, y2 + dy));
                        sv2.y += s2.y * f;
                        sv2.cb += s2.cb * f;
                        sv2.cr += s2.cr * f;
                    }
                }
            }
            YRB ox1 = ov1;
            YRB ox2 = ov2;
            Pixel oc1 = new Pixel(fout.GetPixel(x1, y1));
            Pixel oc2 = new Pixel(fout.GetPixel(x2, y2));
            ox1.y += fmax * (oc2.y - oc1.y);
            ox1.cb += fmax * (oc2.cb - oc1.cb);
            ox1.cr += fmax * (oc2.cr - oc1.cr);
            ox2.y += fmax * (oc1.y - oc2.y);
            ox2.cb += fmax * (oc1.cb - oc2.cb);
            ox2.cr += fmax * (oc1.cr - oc2.cr);

            int curd = Delta(ov1, sv1, 1) + Delta(ov2, sv2, 1);
            int newd = Delta(ox1, sv1, 1) + Delta(ox2, sv2, 1);
            if (newd < curd)
            {
                fout.SetPixel(x1, y1, oc2.color);
                fout.SetPixel(x2, y2, oc1.color);
                return 1;
            }
            return 0;
        }

        int Delta(YRB p1, YRB p2, int sf)
        {
            int dy = (p1.y - p2.y);
            int dr = (p1.cr - p2.cr);
            int db = (p1.cb - p2.cb);

            return dy * dy * sf + dr * dr + db * db;
        }

        Bitmap GetData(Bitmap bmp, out Pixel[] Output)
        {
            FastBitmap fb = new FastBitmap(bmp);
            BitmapData bmpData = fb.LockImage(); 

            Output = new Pixel[bmp.Width * bmp.Height];

            int p = 0;
            for (int y = 0; y < bmp.Height; y++)
            {
                uint col = fb.GetPixel(0, y);
                Output[p++] = new Pixel(col, 0, y);

                for (int x = 1; x < bmp.Width; x++)
                {
                    col = fb.GetNextPixel();
                    Output[p++] = new Pixel(col, x, y);
                }
            }
            fb.UnlockImage(); // Unlock the bits.

            Array.Sort(Output, (a, b) => a.y - b.y);

            return bmp;
        }

        void DupImage(PictureBox s, PictureBox d)
        {
            if (d.Image != null)
                d.Image.Dispose();
            d.Image = new Bitmap(s.Image.Width, s.Image.Height);  
        }

        void GetImagePB(PictureBox pb, string file)
        {
            Bitmap bms = new Bitmap(file, false);
            Bitmap bmp = bms.Clone(new Rectangle(0, 0, bms.Width, bms.Height), PixelFormat.Format32bppArgb);
            bms.Dispose(); 
            if (pb.Image != null)
                pb.Image.Dispose();
            pb.Image = bmp;
        }
    }

    //Adapted from Visual C# Kicks - http://www.vcskicks.com/
    unsafe public class FastBitmap
    {
        private Bitmap workingBitmap = null;
        private int width = 0;
        private BitmapData bitmapData = null;
        private Byte* pBase = null;

        public FastBitmap(Bitmap inputBitmap)
        {
            workingBitmap = inputBitmap;
        }

        public BitmapData LockImage()
        {
            Rectangle bounds = new Rectangle(Point.Empty, workingBitmap.Size);

            width = (int)(bounds.Width * 4 + 3) & ~3;

            //Lock Image
            bitmapData = workingBitmap.LockBits(bounds, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
            pBase = (Byte*)bitmapData.Scan0.ToPointer();
            return bitmapData;
        }

        private uint* pixelData = null;

        public uint GetPixel(int x, int y)
        {
            pixelData = (uint*)(pBase + y * width + x * 4);
            return *pixelData;
        }

        public uint GetNextPixel()
        {
            return *++pixelData;
        }

        public void GetPixelArray(int x, int y, uint[] Values, int offset, int count)
        {
            pixelData = (uint*)(pBase + y * width + x * 4);
            while (count-- > 0)
            {
                Values[offset++] = *pixelData++;
            }
        }

        public void SetPixel(int x, int y, uint color)
        {
            pixelData = (uint*)(pBase + y * width + x * 4);
            *pixelData = color;
        }

        public void SetNextPixel(uint color)
        {
            *++pixelData = color;
        }

        public void UnlockImage()
        {
            workingBitmap.UnlockBits(bitmapData);
            bitmapData = null;
            pBase = null;
        }
    }

}

Form1.Designer.cs

namespace Palette
{
    partial class Form1
    {
        /// <summary>
        /// Variabile di progettazione necessaria.
        /// </summary>
        private System.ComponentModel.IContainer components = null;

        /// <summary>
        /// Liberare le risorse in uso.
        /// </summary>
        /// <param name="disposing">ha valore true se le risorse gestite devono essere eliminate, false in caso contrario.</param>
        protected override void Dispose(bool disposing)
        {
            if (disposing && (components != null))
            {
                components.Dispose();
            }
            base.Dispose(disposing);
        }

        #region Codice generato da Progettazione Windows Form

        /// <summary>
        /// Metodo necessario per il supporto della finestra di progettazione. Non modificare
        /// il contenuto del metodo con l'editor di codice.
        /// </summary>
        private void InitializeComponent()
        {
            this.components = new System.ComponentModel.Container();
            this.panel = new System.Windows.Forms.FlowLayoutPanel();
            this.pbSource = new System.Windows.Forms.PictureBox();
            this.pbPalette = new System.Windows.Forms.PictureBox();
            this.pbOutput = new System.Windows.Forms.PictureBox();
            this.btnStart = new System.Windows.Forms.Button();
            this.progressBar = new System.Windows.Forms.ProgressBar();
            this.imageList1 = new System.Windows.Forms.ImageList(this.components);
            this.lvFiles = new System.Windows.Forms.ListView();
            this.lnTotSwap = new System.Windows.Forms.Label();
            this.lnCurSwap = new System.Windows.Forms.Label();
            this.panel.SuspendLayout();
            ((System.ComponentModel.ISupportInitialize)(this.pbSource)).BeginInit();
            ((System.ComponentModel.ISupportInitialize)(this.pbPalette)).BeginInit();
            ((System.ComponentModel.ISupportInitialize)(this.pbOutput)).BeginInit();
            this.SuspendLayout();
            // 
            // panel
            // 
            this.panel.AutoScroll = true;
            this.panel.AutoSize = true;
            this.panel.Controls.Add(this.pbSource);
            this.panel.Controls.Add(this.pbPalette);
            this.panel.Controls.Add(this.pbOutput);
            this.panel.Dock = System.Windows.Forms.DockStyle.Top;
            this.panel.Location = new System.Drawing.Point(0, 0);
            this.panel.Name = "panel";
            this.panel.Size = new System.Drawing.Size(748, 266);
            this.panel.TabIndex = 3;
            this.panel.WrapContents = false;
            // 
            // pbSource
            // 
            this.pbSource.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
            this.pbSource.Location = new System.Drawing.Point(3, 3);
            this.pbSource.Name = "pbSource";
            this.pbSource.Size = new System.Drawing.Size(157, 260);
            this.pbSource.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
            this.pbSource.TabIndex = 1;
            this.pbSource.TabStop = false;
            // 
            // pbPalette
            // 
            this.pbPalette.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
            this.pbPalette.Location = new System.Drawing.Point(166, 3);
            this.pbPalette.Name = "pbPalette";
            this.pbPalette.Size = new System.Drawing.Size(172, 260);
            this.pbPalette.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
            this.pbPalette.TabIndex = 3;
            this.pbPalette.TabStop = false;
            // 
            // pbOutput
            // 
            this.pbOutput.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
            this.pbOutput.Location = new System.Drawing.Point(344, 3);
            this.pbOutput.Name = "pbOutput";
            this.pbOutput.Size = new System.Drawing.Size(172, 260);
            this.pbOutput.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
            this.pbOutput.TabIndex = 4;
            this.pbOutput.TabStop = false;
            // 
            // btnStart
            // 
            this.btnStart.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
            this.btnStart.Location = new System.Drawing.Point(669, 417);
            this.btnStart.Name = "btnStart";
            this.btnStart.Size = new System.Drawing.Size(79, 42);
            this.btnStart.TabIndex = 4;
            this.btnStart.Text = "Start";
            this.btnStart.UseVisualStyleBackColor = true;
            this.btnStart.Click += new System.EventHandler(this.BtnStart_Click);
            // 
            // progressBar
            // 
            this.progressBar.Dock = System.Windows.Forms.DockStyle.Bottom;
            this.progressBar.Location = new System.Drawing.Point(0, 465);
            this.progressBar.Name = "progressBar";
            this.progressBar.Size = new System.Drawing.Size(748, 16);
            this.progressBar.TabIndex = 5;
            // 
            // imageList1
            // 
            this.imageList1.ColorDepth = System.Windows.Forms.ColorDepth.Depth8Bit;
            this.imageList1.ImageSize = new System.Drawing.Size(16, 16);
            this.imageList1.TransparentColor = System.Drawing.Color.Transparent;
            // 
            // lvFiles
            // 
            this.lvFiles.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Left) 
            | System.Windows.Forms.AnchorStyles.Right)));
            this.lvFiles.CheckBoxes = true;
            this.lvFiles.HideSelection = false;
            this.lvFiles.Location = new System.Drawing.Point(12, 362);
            this.lvFiles.MultiSelect = false;
            this.lvFiles.Name = "lvFiles";
            this.lvFiles.Size = new System.Drawing.Size(651, 97);
            this.lvFiles.Sorting = System.Windows.Forms.SortOrder.Ascending;
            this.lvFiles.TabIndex = 7;
            this.lvFiles.UseCompatibleStateImageBehavior = false;
            this.lvFiles.View = System.Windows.Forms.View.List;
            this.lvFiles.ItemCheck += new System.Windows.Forms.ItemCheckEventHandler(this.lvFiles_ItemCheck);
            this.lvFiles.ItemSelectionChanged += new System.Windows.Forms.ListViewItemSelectionChangedEventHandler(this.lvFiles_ItemSelectionChanged);
            // 
            // lnTotSwap
            // 
            this.lnTotSwap.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
            this.lnTotSwap.Location = new System.Drawing.Point(669, 362);
            this.lnTotSwap.Name = "lnTotSwap";
            this.lnTotSwap.Size = new System.Drawing.Size(58, 14);
            this.lnTotSwap.TabIndex = 8;
            this.lnTotSwap.Text = "label1";
            // 
            // lnCurSwap
            // 
            this.lnCurSwap.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
            this.lnCurSwap.Location = new System.Drawing.Point(669, 385);
            this.lnCurSwap.Name = "lnCurSwap";
            this.lnCurSwap.Size = new System.Drawing.Size(58, 14);
            this.lnCurSwap.TabIndex = 9;
            this.lnCurSwap.Text = "label1";
            // 
            // Form1
            // 
            this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
            this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
            this.BackColor = System.Drawing.SystemColors.ControlDark;
            this.ClientSize = new System.Drawing.Size(748, 481);
            this.Controls.Add(this.lnCurSwap);
            this.Controls.Add(this.lnTotSwap);
            this.Controls.Add(this.lvFiles);
            this.Controls.Add(this.progressBar);
            this.Controls.Add(this.btnStart);
            this.Controls.Add(this.panel);
            this.Name = "Form1";
            this.Text = "Form1";
            this.Load += new System.EventHandler(this.Form1_Load);
            this.panel.ResumeLayout(false);
            this.panel.PerformLayout();
            ((System.ComponentModel.ISupportInitialize)(this.pbSource)).EndInit();
            ((System.ComponentModel.ISupportInitialize)(this.pbPalette)).EndInit();
            ((System.ComponentModel.ISupportInitialize)(this.pbOutput)).EndInit();
            this.ResumeLayout(false);
            this.PerformLayout();

        }

        #endregion

        private System.Windows.Forms.FlowLayoutPanel panel;
        private System.Windows.Forms.PictureBox pbSource;
        private System.Windows.Forms.PictureBox pbPalette;
        private System.Windows.Forms.PictureBox pbOutput;
        private System.Windows.Forms.Button btnStart;
        private System.Windows.Forms.ProgressBar progressBar;
        private System.Windows.Forms.ImageList imageList1;
        private System.Windows.Forms.ListView lvFiles;
        private System.Windows.Forms.Label lnTotSwap;
        private System.Windows.Forms.Label lnCurSwap;
    }
}

Program.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

namespace Palette
{
    static class Program
    {
        /// <summary>
        /// Punto di ingresso principale dell'applicazione.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new Form1());
        }
    }
}

Check 'Unsafe code' in project property to compile.

edc65

Posted 2014-07-09T04:03:39.377

Reputation: 31 086

4IMO this one produces the best results – figgycity50 – 2014-07-14T07:17:09.363

9That's absolutely incredible with that awful rainbow palette. – Michael B – 2014-07-14T19:20:10.553

1Amazing, winner! – jjrv – 2014-07-17T19:07:04.333

25

JS

Just run it on two image URLs.

As a JS package, you can run it yourself in the browser. Fiddles that play around with different settings are provided. Please bear in mind that this fiddle: http://jsfiddle.net/eithe/J7jEk/ will always be up to date (and contain all the settings). As this grows (new options are added), I won't be updating all the previous fiddles.

Calls

  • f("string to image (palette)", "string to image", {object of options});
  • f([[palette pixel], [palette pixel], ...], "string to image", {object of options});

Options

  • algorithm: 'balanced', 'surrounding', 'reverse', 'hsv', 'yiq', 'lab'
  • speed: animation speed
  • movement: true - should the animation show movement from start to end position
  • surrounding: if the 'surrounding' algorithm is selected, this is the weight given to the surrounding pixels when calculating the weight of a given pixel
  • h s v: if 'hsv' algorithm is selected these parameters control how much hue, saturation and value affect the weights
  • y i q: if the 'yiq' algorithm is selected these parameters control how much y i q affect the weights
  • l a b: if 'lab' algorithm is selected these parameters control how much l a b affect the weights
  • noise: how much randomness will be added to weights
  • unique: should the pixels from palette be used only once (see: Photomosaics or: How Many Programmers Does it Take to Replace a Light Bulb?)
  • pixel_1 / pixel_2 {width,height}: size of the pixel (in pixels :D)

Gallery (for the showcases I'm always using Mona Lisa & American Gothic, unless other specified):

eithed

Posted 2014-07-09T04:03:39.377

Reputation: 1 229

The jsfiddle isn't working for me. – BobTheAwesome – 2015-03-03T14:05:11.213

@BobTheAwesome - as the jsfiddle doesn't have an option to upload images (or at least I don't know of one), I had to use a trick with a CORS proxy. If you run the code locally (i.e. with the images on the same domain) then it will work. – eithed – 2015-03-04T15:35:22.263

The animation looks great! but your image is one pixel shorter than normal. – Calvin's Hobbies – 2014-07-09T22:02:54.083

@Calvin's Hobbies - Had to cut it in paint :P Probably that's where the difference comes from. Updated! – eithed – 2014-07-09T22:05:20.890

I like this one: http://jsfiddle.net/q865W/4/

– Justin – 2014-07-09T23:22:18.103

@Quincunx Cheers! With weighted version it works even better – eithed – 2014-07-09T23:31:30.740

Wow. 0_0 That's really good. http://jsfiddle.net/q865W/6/

– Justin – 2014-07-09T23:35:35.057

@Quincunx Thank you kindly :) – eithed – 2014-07-09T23:38:16.630

24

C, with Lab color space and improved dithering

Did I say I was done? I lied. I think the algorithm in my other solution is the best out there, but Perl just isn't fast enough for number-crunching tasks, so I reimplemented my work in C. It now runs all of the images in this post at higher quality than the original, at about 3 minutes per image; slightly-lower-quality runs (at the 0.5% level) take 20-30 seconds per image. Basically all of the work is done with ImageMagick, and the dithering uses ImageMagick's cubic spline interpolation, which gives a better, less patterned result.
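
The non-dithered swap criterion (`should_swap` in the C code below) can be sketched in Python. The sRGB-to-Lab conversion here is a textbook D65 one, not ImageMagick's internal transform, so the exact numbers will differ; only the accept/reject logic is meant to mirror the C version:

```python
def srgb_to_lab(px):
    """Standard sRGB (0-255) -> CIE Lab, D65 white point."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in px)
    # linear sRGB -> XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> Lab
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def lab_dist2(a, b):
    """Squared Lab distance between two RGB pixels."""
    la, lb = srgb_to_lab(a), srgb_to_lab(b)
    return sum((u - v) ** 2 for u, v in zip(la, lb))

def should_swap(img, target, p1, p2):
    """Swap the pixels of img at positions p1, p2 iff total Lab error drops."""
    dist = lab_dist2(img[p1], target[p1]) + lab_dist2(img[p2], target[p2])
    swapped = lab_dist2(img[p2], target[p1]) + lab_dist2(img[p1], target[p2])
    return swapped < dist
```

Here `img` and `target` are any position-to-RGB mappings; the full C program below does the real work on ImageMagick pixel buffers.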

Code

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <unistd.h>
#include <wand/MagickWand.h>

#define ThrowWandException(wand) \
{ \
  char \
  *description; \
  \
  ExceptionType \
  severity; \
  \
  description=MagickGetException(wand,&severity); \
  (void) fprintf(stderr,"%s %s %lu %s\n",GetMagickModule(),description); \
  description=(char *) MagickRelinquishMemory(description); \
  abort(); \
  exit(-1); \
}

int width, height; /* Target image size */
MagickWand *source_wand, *target_wand, *img_wand, *target_lab_wand, *img_lab_wand;
PixelPacket *source_pixels, *target_pixels, *img_pixels, *target_lab_pixels, *img_lab_pixels;
Image *img, *img_lab, *target, *target_lab;
CacheView *img_lab_view, *target_lab_view;
ExceptionInfo *e;

MagickWand *load_image(const char *filename) {
  MagickWand *img = NewMagickWand();
  if (!MagickReadImage(img, filename)) {
    ThrowWandException(img);
  }
  return img;
}

PixelPacket *get_pixels(MagickWand *wand) {
  PixelPacket *ret = GetAuthenticPixels(
      GetImageFromMagickWand(wand), 0, 0,
      MagickGetImageWidth(wand), MagickGetImageHeight(wand), e);
  CatchException(e);
  return ret;
}

void sync_pixels(MagickWand *wand) {
  SyncAuthenticPixels(GetImageFromMagickWand(wand), e);
  CatchException(e);
}

MagickWand *transfer_pixels() {
  if (MagickGetImageWidth(source_wand) * MagickGetImageHeight(source_wand)
      != MagickGetImageWidth(target_wand) * MagickGetImageHeight(target_wand)) {
    perror("size mismatch");
  }

  MagickWand *img_wand = CloneMagickWand(target_wand);
  img_pixels = get_pixels(img_wand);
  memcpy(img_pixels, source_pixels, 
      MagickGetImageWidth(img_wand) * MagickGetImageHeight(img_wand) * sizeof(PixelPacket));

  sync_pixels(img_wand);
  return img_wand;
}

MagickWand *image_to_lab(MagickWand *img) {
  MagickWand *lab = CloneMagickWand(img);
  TransformImageColorspace(GetImageFromMagickWand(lab), LabColorspace);
  return lab;
}

int lab_distance(PixelPacket *a, PixelPacket *b) {
  int l_diff = (GetPixelL(a) - GetPixelL(b)) / 256,
      a_diff = (GetPixela(a) - GetPixela(b)) / 256,
      b_diff = (GetPixelb(a) - GetPixelb(b)) / 256;

  return (l_diff * l_diff + a_diff * a_diff + b_diff * b_diff);
}

int should_swap(int x1, int x2, int y1, int y2) {
  int dist = lab_distance(&img_lab_pixels[width * y1 + x1], &target_lab_pixels[width * y1 + x1])
           + lab_distance(&img_lab_pixels[width * y2 + x2], &target_lab_pixels[width * y2 + x2]);
  int swapped_dist = lab_distance(&img_lab_pixels[width * y2 + x2], &target_lab_pixels[width * y1 + x1])
                   + lab_distance(&img_lab_pixels[width * y1 + x1], &target_lab_pixels[width * y2 + x2]);

  return swapped_dist < dist;
}

void pixel_multiply_add(MagickPixelPacket *dest, PixelPacket *src, double mult) {
  dest->red += (double)GetPixelRed(src) * mult;
  dest->green += ((double)GetPixelGreen(src) - 32768) * mult;
  dest->blue += ((double)GetPixelBlue(src) - 32768) * mult;
}

#define min(x,y) (((x) < (y)) ? (x) : (y))
#define max(x,y) (((x) > (y)) ? (x) : (y))

double mpp_distance(MagickPixelPacket *a, MagickPixelPacket *b) {
  double l_diff = QuantumScale * (a->red - b->red),
         a_diff = QuantumScale * (a->green - b->green),
         b_diff = QuantumScale * (a->blue - b->blue);
  return (l_diff * l_diff + a_diff * a_diff + b_diff * b_diff);
}

void do_swap(PixelPacket *pix, int x1, int x2, int y1, int y2) {
  PixelPacket tmp = pix[width * y1 + x1];
  pix[width * y1 + x1] = pix[width * y2 + x2];
  pix[width * y2 + x2] = tmp;
}

int should_swap_dither(double detail, int x1, int x2, int y1, int y2) {
//  const InterpolatePixelMethod method = Average9InterpolatePixel;
  const InterpolatePixelMethod method = SplineInterpolatePixel;

  MagickPixelPacket img1, img2, img1s, img2s, target1, target2;
  GetMagickPixelPacket(img, &img1);
  GetMagickPixelPacket(img, &img2);
  GetMagickPixelPacket(img, &img1s);
  GetMagickPixelPacket(img, &img2s);
  GetMagickPixelPacket(target, &target1);
  GetMagickPixelPacket(target, &target2);
  InterpolateMagickPixelPacket(img, img_lab_view, method, x1, y1, &img1, e);
  InterpolateMagickPixelPacket(img, img_lab_view, method, x2, y2, &img2, e);
  InterpolateMagickPixelPacket(target, target_lab_view, method, x1, y1, &target1, e);
  InterpolateMagickPixelPacket(target, target_lab_view, method, x2, y2, &target2, e);
  do_swap(img_lab_pixels, x1, x2, y1, y2);
//  sync_pixels(img_wand);
  InterpolateMagickPixelPacket(img, img_lab_view, method, x1, y1, &img1s, e);
  InterpolateMagickPixelPacket(img, img_lab_view, method, x2, y2, &img2s, e);
  do_swap(img_lab_pixels, x1, x2, y1, y2);
//  sync_pixels(img_wand);

  pixel_multiply_add(&img1, &img_lab_pixels[width * y1 + x1], detail);
  pixel_multiply_add(&img2, &img_lab_pixels[width * y2 + x2], detail);
  pixel_multiply_add(&img1s, &img_lab_pixels[width * y2 + x2], detail);
  pixel_multiply_add(&img2s, &img_lab_pixels[width * y1 + x1], detail);
  pixel_multiply_add(&target1, &target_lab_pixels[width * y1 + x1], detail);
  pixel_multiply_add(&target2, &target_lab_pixels[width * y2 + x2], detail);

  double dist = mpp_distance(&img1, &target1)
              + mpp_distance(&img2, &target2);
  double swapped_dist = mpp_distance(&img1s, &target1)
                      + mpp_distance(&img2s, &target2);

  return swapped_dist + 1.0e-4 < dist;
}

int main(int argc, char *argv[]) {
  if (argc != 7) {
    fprintf(stderr, "Usage: %s source.png target.png dest nodither_pct dither_pct detail\n", argv[0]);
    return 1;
  }
  char *source_filename = argv[1];
  char *target_filename = argv[2];
  char *dest = argv[3];
  double nodither_pct = atof(argv[4]);
  double dither_pct = atof(argv[5]);
  double detail = atof(argv[6]) - 1;
  const int SWAPS_PER_LOOP = 1000000;
  int nodither_limit = ceil(SWAPS_PER_LOOP * nodither_pct / 100);
  int dither_limit = ceil(SWAPS_PER_LOOP * dither_pct / 100);
  int dither = 0, frame = 0;
  char outfile[256], cmdline[1024];
  sprintf(outfile, "out/%s.png", dest);

  MagickWandGenesis();
  e = AcquireExceptionInfo();
  source_wand = load_image(source_filename);
  source_pixels = get_pixels(source_wand);
  target_wand = load_image(target_filename);
  target_pixels = get_pixels(target_wand);
  img_wand = transfer_pixels();
  img_pixels = get_pixels(img_wand);
  target_lab_wand = image_to_lab(target_wand);
  target_lab_pixels = get_pixels(target_lab_wand);
  img_lab_wand = image_to_lab(img_wand);
  img_lab_pixels = get_pixels(img_lab_wand);
  img = GetImageFromMagickWand(img_lab_wand);
  target = GetImageFromMagickWand(target_lab_wand);
  img_lab_view = AcquireAuthenticCacheView(img, e);
  target_lab_view = AcquireAuthenticCacheView(target,e);
  CatchException(e);

  width = MagickGetImageWidth(img_wand);
  height = MagickGetImageHeight(img_wand);

  while (1) {
    int swaps_made = 0;
    for (int n = 0 ; n < SWAPS_PER_LOOP ; n++) {
      int x1 = rand() % width,
          x2 = rand() % width,
          y1 = rand() % height,
          y2 = rand() % height;

      int swap = dither ?
        should_swap_dither(detail, x1, x2, y1, y2)
        : should_swap(x1, x2, y1, y2);

      if (swap) {
        do_swap(img_pixels, x1, x2, y1, y2);
        do_swap(img_lab_pixels, x1, x2, y1, y2);
        swaps_made ++;
      }
    }

    sync_pixels(img_wand);
    if (!MagickWriteImages(img_wand, outfile, MagickTrue)) {
      ThrowWandException(img_wand);
    }
    img_pixels = get_pixels(img_wand);
    sprintf(cmdline, "cp out/%s.png anim/%s/%05i.png", dest, dest, frame++);
    system(cmdline);

    if (!dither && swaps_made < nodither_limit) {
      sprintf(cmdline, "cp out/%s.png out/%s-nodither.png", dest, dest);
      system(cmdline);
      dither = 1;
    } else if (dither && swaps_made < dither_limit)
      break;
  }

  return 0;
}

Compile with

gcc -std=gnu99 -O3 -march=native -ffast-math \
  -o transfer `pkg-config --cflags MagickWand` \
  transfer.c `pkg-config --libs MagickWand` -lm

Results

Mostly the same as the Perl version, just slightly better, but there are a few exceptions. Dithering is less noticeable in general. Scream -> Starry Night doesn't have the "flaming mountain" effect, and the Camaro looks less glitchy with the gray pixels. I think the Perl version's colorspace code has a bug with low-saturation pixels.

American Gothic palette

Mona Lisa palette

Starry Night palette

Scream palette

Spheres palette

Mustang (Camaro palette)

Camaro (Mustang palette)

hobbs

Posted 2014-07-09T04:03:39.377

Reputation: 2 403

Could you please post the values you used as nodither_pct, dither_pct and detail in this example? I am running your programme with different combinations, but for my images, they seem to be sub-optimal, and the palettes are close to the ones of yours, so... please? – Andreï Kostyrka – 2016-01-24T04:12:33.810

@AndreïKostyrka 0.1 0.1 1.6 are the values I used generating these images. – hobbs – 2016-01-24T05:09:08.640

@AndreïKostyrka 0.5 0.5 1.6 should result in almost as good quality with much faster speed. – hobbs – 2016-01-24T05:14:02.460

@hobbs Thank you very much! Sleep deprivation made me think that they must be integer, but the ceil does the necessary conversion, and the detail, viz. mult, uses doubles. Obviously 2 2 2 was less precise because 2% changes are far less subtle. Thank you, now I got it! Besides that, increasing SWAPS_PER_LOOP might also be a good idea for big images. – Andreï Kostyrka – 2016-01-24T05:54:40.087

@AndreïKostyrka yeah, there's a bit of dirtiness and confusion in my code because it's for a competition. I'm interested to see if you make anything interesting with it :) – hobbs – 2016-02-09T18:18:16.790

@hobbs Maybe in several years, when I impress a girl with two portraits of herself being transmuted one into another, if she marries me, all the kudos will go to you. This is the perfect tool to charm a young lady—to morph her into Mona Lisa. Your tool seems to produce the best result among all. So wait... just wait... – Andreï Kostyrka – 2016-03-03T16:27:27.410

Yes sir, yours indeed is the best out there. Why does the C version generate results 0.5% worse? – RMalke – 2014-07-14T00:11:10.327

@RMalke It's only worse when he only lets it run for 20-30 seconds. – trlkly – 2014-07-18T02:01:43.960

23

HSL nearest value with error propagation and dithering

I've made minor adaptations to the code I used for my AllRGB images. It's designed to process 16-megapixel images within reasonable time and memory constraints, so it uses some data structure classes which aren't in the standard library; I've omitted those because there's already a lot of code here, and this is the interesting part.

For AllRGB I manually tune the wavelets which give priority to certain areas of the image; for this unguided usage, I'm picking one wavelet which assumes rule of thirds layout putting the main interest a third of the way down from the top.
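The weighting scheme described above can be sketched independently of the Java below (a hedged Python sketch with my own names, assuming each "wavelet" is a Gaussian bump given as center x, center y, x radius, y radius, amplitude, matching the layout of the `waveletDefault` array):

```python
import math
import random

def pixel_weight(x, y, wavelets):
    """Sum of Gaussian 'wavelets'; each is (cx, cy, rx, ry, amplitude)
    in normalized [0, 1] image coordinates."""
    total = 0.0
    for cx, cy, rx, ry, amp in wavelets:
        total += amp * math.exp(-((x - cx) / rx) ** 2 - ((y - cy) / ry) ** 2)
    return total

# Rule-of-thirds default: centered horizontally, a third of the way down.
DEFAULT = [(0.5, 0.333, 0.5, 0.5, 1.0)]

def priority(x, y):
    """Lower value = processed earlier; the random term acts as dither."""
    return -pixel_weight(x, y, DEFAULT) + 0.2 * random.random()
```

Pixels are then popped from a min-heap keyed on `priority`, so the area of interest gets first pick of the palette.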

American Gothic with palette from Mona Lisa Mona Lisa with palette from American Gothic

My favourite of the 36:

River with palette from Mona Lisa

Full Cartesian product of (image, palette)

package org.cheddarmonk.graphics;

import org.cheddarmonk.util.*;
import java.awt.Point;
import java.awt.image.*;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;

public class PaletteApproximator {
    public static void main(String[] args) throws Exception {
        // Adjust this to fine-tune for the areas which are most important.
        float[] waveletDefault = new float[] {0.5f, 0.333f, 0.5f, 0.5f, 1};

        generateAndSave(args[0], args[1], args[2], waveletDefault);
    }

    private static void generateAndSave(String paletteFile, String fileIn, String fileOut, float[]... wavelets) throws Exception {
        BufferedImage imgIn = ImageIO.read(new File(fileIn));
        int w = imgIn.getWidth(), h = imgIn.getHeight();

        int[] buf = new int[w * h];
        imgIn.getRGB(0, 0, w, h, buf, 0, w);

        SimpleOctTreeInt palette = loadPalette(paletteFile);
        generate(palette, buf, w, h, wavelets);

        // Masks for R, G, B, A.
        final int[] off = new int[]{0xff0000, 0xff00, 0xff, 0xff000000};
        // The corresponding colour model.
        ColorModel colourModel = ColorModel.getRGBdefault();
        DataBufferInt dbi = new DataBufferInt(buf, buf.length);
        Point origin = new Point(0, 0);
        WritableRaster raster = Raster.createPackedRaster(dbi, w, h, w, off, origin);
        BufferedImage imgOut = new BufferedImage(colourModel, raster, false, null);

        ImageIO.write(imgOut, "PNG", new File(fileOut));
    }

    private static SimpleOctTreeInt loadPalette(String paletteFile) throws Exception {
        BufferedImage img = ImageIO.read(new File(paletteFile));
        int w = img.getWidth(), h = img.getHeight();

        int[] buf = new int[w * h];
        img.getRGB(0, 0, w, h, buf, 0, w);

        // Parameters tuned for 4096x4096
        SimpleOctTreeInt octtree = new SimpleOctTreeInt(0, 1, 0, 1, 0, 1, 16, 12);
        for (int i = 0; i < buf.length; i++) {
            octtree.add(buf[i], transform(buf[i]));
        }

        return octtree;
    }

    private static void generate(SimpleOctTreeInt octtree, int[] buf, int w, int h, float[]... wavelets) {
        int m = w * h;

        LeanBinaryHeapInt indices = new LeanBinaryHeapInt();
        Random rnd = new Random();
        for (int i = 0; i < m; i++) {
            float x = (i % w) / (float)w, y = (i / w) / (float)h; // normalize y by height, not width

            float weight = 0;
            for (float[] wavelet : wavelets) {
                weight += wavelet[4] * Math.exp(-Math.pow((x - wavelet[0]) / wavelet[2], 2) - Math.pow((y - wavelet[1]) / wavelet[3], 2));
            }

            // Random element provides some kind of dither
            indices.insert(i, -weight + 0.2f * rnd.nextFloat());
        }

        // Error diffusion buffers.
        float[] errx = new float[m], erry = new float[m], errz = new float[m];

        for (int i = 0; i < m; i++) {
            int idx = indices.pop();
            int x = idx % w, y = idx / w;

            // TODO Bicubic interpolation? For the time being, prefer to scale the input image externally...
            float[] tr = transform(buf[x + w * y]);
            tr[0] += errx[idx]; tr[1] += erry[idx]; tr[2] += errz[idx];

            int pixel = octtree.nearestNeighbour(tr, 2);
            buf[x + y * w] = 0xff000000 | pixel;

            // Don't reuse pixels.
            float[] trPix = transform(pixel);
            boolean ok = octtree.remove(pixel, trPix);
            if (!ok) throw new IllegalStateException("Failed to remove from octtree");

            // Propagate error in 4 directions, not caring whether or not we've already done that pixel.
            // This will lose some error, but that might be a good thing.
            float dx = (tr[0] - trPix[0]) / 4, dy = (tr[1] - trPix[1]) / 4, dz = (tr[2] - trPix[2]) / 4;
            if (x > 0) {
                errx[idx - 1] += dx;
                erry[idx - 1] += dy;
                errz[idx - 1] += dz;
            }
            if (x < w - 1) {
                errx[idx + 1] += dx;
                erry[idx + 1] += dy;
                errz[idx + 1] += dz;
            }
            if (y > 0) {
                errx[idx - w] += dx;
                erry[idx - w] += dy;
                errz[idx - w] += dz;
            }
            if (y < h - 1) {
                errx[idx + w] += dx;
                erry[idx + w] += dy;
                errz[idx + w] += dz;
            }
        }
    }

    private static final float COS30 = (float)Math.sqrt(3) / 2;
    private static float[] transform(int rgb) {
        float r = ((rgb >> 16) & 0xff) / 255.f;
        float g = ((rgb >> 8) & 0xff) / 255.f;
        float b = (rgb & 0xff) / 255.f;

        // HSL cone coords
        float cmax = (r > g) ? r : g; if (b > cmax) cmax = b;
        float cmin = (r < g) ? r : g; if (b < cmin) cmin = b;
        float[] cone = new float[3];
        cone[0] = (cmax + cmin) / 2;
        cone[1] = 0.5f * (1 + r - (g + b) / 2);
        cone[2] = 0.5f * (1 + (g - b) * COS30);
        return cone;
    }
}

Peter Taylor

Posted 2014-07-09T04:03:39.377

Reputation: 41 901

22

Python

Not pretty codewise, nor by results.

from blist import blist
from PIL import Image
import random

def randpop(colors):
    j = random.randrange(len(colors))
    return colors.pop(j)

colors = blist(Image.open('in1.png').getdata())
random.shuffle(colors)
target = Image.open('in2.png')

out = target.copy()
data = list(list(i) for i in out.getdata())

assert len(data) == len(colors)

w, h = out.size

coords = []
for i in xrange(h):
    for j in xrange(w):
        coords.append((i, j))

# Adjust color balance
dsum = [sum(d[i] for d in data) for i in xrange(3)]
csum = [sum(c[i] for c in colors) for i in xrange(3)]
adjust = [(csum[i] - dsum[i]) // len(data) for i in xrange(3)]
for i, j in coords:
    for k in xrange(3):
        data[i*w + j][k] += adjust[k]

random.shuffle(coords)

# larger value here gives better results but take longer
choose = 100
threshold = 10

done = set()
while len(coords):
    if not len(coords) % 1000:
        print len(coords) // 1000
    i, j = coords.pop()
    ind = i*w + j
    done.add(ind)
    t = data[ind]
    dmin = 255*3
    kmin = 0
    choices = []
    while colors and len(choices) < choose:
        k = len(choices)
        choices.append(randpop(colors))
        c = choices[-1]
        d = sum(abs(t[l] - c[l]) for l in xrange(3))
        if d < dmin:
            dmin = d
            kmin = k
            if d < threshold:
                break
    c = choices.pop(kmin)
    data[ind] = c
    colors.extend(choices)

    # Push the error to nearby pixels for dithering
    if ind + 1 < len(data) and ind + 1 not in done:
        ind2 = ind + 1
    elif ind + w < len(data) and ind + w not in done:
        ind2 = ind + w
    elif ind > 0 and ind - 1 not in done:
        ind2 = ind - 1
    elif ind - w >= 0 and ind - w not in done:
        ind2 = ind - w
    else:
        ind2 = None
    if ind2 is not None:
        for k in xrange(3):
            err = abs(t[k] - c[k])
            data[ind2][k] += err

out.putdata(data)
out.save('out.png')

Possible improvements:

  • smarter color correction?
  • better quality metric?
  • push the error to all surrounding pixels rather than one
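For the last bullet, a standard Floyd–Steinberg-style split is one option. This is my own untested sketch of how the single-neighbour push above could be generalized (note the classic weights assume raster scan order, while this script visits pixels in shuffled order, so it is only an approximation):

```python
def diffuse_error(data, done, ind, err, w):
    """Spread the per-channel residual err from pixel ind to its
    Floyd-Steinberg neighbours (right, below-left, below, below-right),
    skipping pixels that have already been finalized."""
    weights = [(1, 7 / 16.0), (w - 1, 3 / 16.0), (w, 5 / 16.0), (w + 1, 1 / 16.0)]
    for off, frac in weights:
        j = ind + off
        if 0 <= j < len(data) and j not in done:
            for k in range(3):
                data[j][k] += int(err[k] * frac)
```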

Ugly (1->2): 1->2

A bit better (2->1): 2->1

Decent (2->3): 2->3

Like a bad raytracer (3->4): 3->4

Cheating – use all the good pixels on the upper half and claim paint ran out: 1->2

user23522

Posted 2014-07-09T04:03:39.377

Reputation: 221

3 The last one is... an interesting idea. But still not upvoting. – John Dvorak – 2014-07-13T04:11:12.973

20

Python (using kd-tree and luminosity)

Nice challenge. I decided to go with a kd-tree approach. The basic idea is that the kd-tree partitions the colors and luminosities according to how they occur in the picture.

The tree's first split is on red: it divides all the colors into two approximately equal groups (light red and dark red). Next it splits those two partitions along green, then blue, then luminosity, then red again, and so on until the tree is built. I build one kd-tree for the source image and one for the destination image, map the source tree onto the destination tree, and overwrite the destination's color data accordingly. All of the results are shown here.
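The pairing idea, building the same kind of kd-tree ordering for both images and giving the k-th position of one the k-th color of the other, can be shown in miniature (my own simplified sketch over plain RGB tuples, not the full implementation):

```python
def kd_order(items, depth=0, dims=3):
    """Order (color, index) pairs the way a kd-tree build visits medians:
    sort on the cycling color axis, emit left subtree, median, right."""
    if not items:
        return []
    axis = depth % dims
    items = sorted(items, key=lambda it: it[0][axis])
    mid = len(items) // 2
    return (kd_order(items[:mid], depth + 1, dims)
            + [items[mid]]
            + kd_order(items[mid + 1:], depth + 1, dims))

def rearrange(source, palette):
    """Give the k-th kd-ordered source position the k-th kd-ordered
    palette color; every palette pixel is used exactly once."""
    src = kd_order([(c, i) for i, c in enumerate(source)])
    pal = kd_order([(c, i) for i, c in enumerate(palette)])
    out = [None] * len(source)
    for (_, i), (c, _) in zip(src, pal):
        out[i] = c
    return out
```

Because both orderings descend matching branches of similarly shaped trees, dark source pixels tend to receive dark palette pixels and vice versa.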

Some examples:

Mona Lisa -> American Gothic

mona_lisa american gothic (mona_lisa style)

American Gothic -> Mona Lisa

american gothic mona_lisa (american gothic style)

Starry Night --> The Scream

starry night starry scream

The Scream --> Starry Night

scream screamy stars

Rainbow spheres

enter image description here mona lisa balls screamy balls

Here is short movie using @Calvin's Hobbies movie frame maker:

enter image description here

And now for the code :-)

from PIL import Image

""" Computation of hue, saturation, luminosity.
Based on http://stackoverflow.com/questions/3732046/how-do-you-get-the-hue-of-a-xxxxxx-colour
"""
def rgbToLsh(t):
    r = t[0]
    g = t[1]
    b = t[2]
    r /= 255.
    g /= 255.
    b /= 255.
    vmax = max([r, g, b])
    vmin = min([r, g, b]);
    h = s = l = (vmax + vmin) / 2.;

    if (vmax == vmin):
        h = s = 0.  # achromatic
    else:
        d = vmax - vmin;
        if l > 0.5:
            s = d / (2. - vmax - vmin)
        else:
            s = d / (vmax + vmin);
        if vmax == r:
            if g<b: 
                m = 6. 
            else: 
                m = 0. 
            h = (g - b) / d + m
        elif vmax == g: 
            h = (b - r) / d + 2.
        elif vmax == b: 
            h = (r - g) / d + 4.
        h /= 6.;
    return [l,s,h];



""" KDTree implementation.
Based on https://code.google.com/p/python-kdtree/ 
"""
__version__ = "1r11.1.2010"
__all__ = ["KDTree"]

def square_distance(pointA, pointB):
    # squared euclidean distance
    distance = 0
    dimensions = len(pointA) # assumes both points have the same dimensions
    for dimension in range(dimensions):
        distance += (pointA[dimension] - pointB[dimension])**2
    return distance

class KDTreeNode():
    def __init__(self, point, left, right):
        self.point = point
        self.left = left
        self.right = right

    def is_leaf(self):
        return (self.left == None and self.right == None)

class KDTreeNeighbours():
    """ Internal structure used in nearest-neighbours search.
    """
    def __init__(self, query_point, t):
        self.query_point = query_point
        self.t = t # neighbours wanted
        self.largest_distance = 0 # squared
        self.current_best = []

    def calculate_largest(self):
        if self.t >= len(self.current_best):
            self.largest_distance = self.current_best[-1][1]
        else:
            self.largest_distance = self.current_best[self.t-1][1]

    def add(self, point):
        sd = square_distance(point, self.query_point)
        # run through current_best, try to find appropriate place
        for i, e in enumerate(self.current_best):
            if i == self.t:
                return # enough neighbours, this one is farther, let's forget it
            if e[1] > sd:
                self.current_best.insert(i, [point, sd])
                self.calculate_largest()
                return
        # append it to the end otherwise
        self.current_best.append([point, sd])
        self.calculate_largest()

    def get_best(self):
        return [element[0] for element in self.current_best[:self.t]]



class KDTree():
    """ KDTree implementation.

        Example usage:

            from kdtree import KDTree

            data = <load data> # iterable of points (which are also iterable, same length)
            point = <the point of which neighbours we're looking for>

            tree = KDTree.construct_from_data(data)
            nearest = tree.query(point, t=4) # find nearest 4 points
    """

    def __init__(self, data):

        self.data_listing = []
        def build_kdtree(point_list, depth):

            # code based on wikipedia article: http://en.wikipedia.org/wiki/Kd-tree
            if not point_list:
                return None

            # select axis based on depth so that axis cycles through all valid values
            axis = depth % 4 #len(point_list[0]) # assumes all points have the same dimension

            # sort point list and choose median as pivot point,
            # TODO: better selection method, linear-time selection, distribution
            point_list.sort(key=lambda point: point[axis])
            median = len(point_list)/2 # choose median

            # create node and recursively construct subtrees
            node = KDTreeNode(point=point_list[median],
                              left=build_kdtree(point_list[0:median], depth+1),
                              right=build_kdtree(point_list[median+1:], depth+1))

            # add point to listing                   
            self.data_listing.append(point_list[median])
            return node

        self.root_node = build_kdtree(data, depth=0)

    @staticmethod
    def construct_from_data(data):
        tree = KDTree(data)
        return tree

    def query(self, query_point, t=1):
        statistics = {'nodes_visited': 0, 'far_search': 0, 'leafs_reached': 0}

        def nn_search(node, query_point, t, depth, best_neighbours):
            if node == None:
                return

            #statistics['nodes_visited'] += 1

            # if we have reached a leaf, let's add to current best neighbours,
            # (if it's better than the worst one or if there is not enough neighbours)
            if node.is_leaf():
                #statistics['leafs_reached'] += 1
                best_neighbours.add(node.point)
                return

            # this node is no leaf

            # select dimension for comparison (based on current depth)
            axis = depth % len(query_point)

            # figure out which subtree to search
            near_subtree = None # near subtree
            far_subtree = None # far subtree (perhaps we'll have to traverse it as well)

            # compare query_point and point of current node in selected dimension
            # and figure out which subtree is farther than the other
            if query_point[axis] < node.point[axis]:
                near_subtree = node.left
                far_subtree = node.right
            else:
                near_subtree = node.right
                far_subtree = node.left

            # recursively search through the tree until a leaf is found
            nn_search(near_subtree, query_point, t, depth+1, best_neighbours)

            # while unwinding the recursion, check if the current node
            # is closer to query point than the current best,
            # also, until t points have been found, search radius is infinity
            best_neighbours.add(node.point)

            # check whether there could be any points on the other side of the
            # splitting plane that are closer to the query point than the current best
            if (node.point[axis] - query_point[axis])**2 < best_neighbours.largest_distance:
                #statistics['far_search'] += 1
                nn_search(far_subtree, query_point, t, depth+1, best_neighbours)

            return

        # if there's no tree, there's no neighbors
        if self.root_node != None:
            neighbours = KDTreeNeighbours(query_point, t)
            nn_search(self.root_node, query_point, t, depth=0, best_neighbours=neighbours)
            result = neighbours.get_best()
        else:
            result = []

        #print statistics
        return result


#List of files: 
files = ['JXgho.png','N6IGO.png','c5jq1.png','itzIe.png','xPAwA.png','y2VZJ.png']

#Loop over source files 
for im_orig in range(len(files)):
    srch = Image.open(files[im_orig])   #Open file handle 
    src = srch.load();                  #Load file  

    # Build data structure (R,G,B,lum,xpos,ypos) for source file
    srcdata =  [(src[i,j][0],src[i,j][1],src[i,j][2],rgbToLsh(src[i,j])[0],i,j) \
                     for i in range(srch.size[0]) \
                     for j in range(srch.size[1])]  

    # Build kd-tree for source
    srctree = KDTree.construct_from_data(srcdata)

    for im in range(len(files)):
        desh = Image.open(files[im])
        des = desh.load();

        # Build data structure (R,G,B,lum,xpos,ypos) for destination file
        desdata =  [(des[i,j][0],des[i,j][1],des[i,j][2],rgbToLsh(des[i,j])[0],i,j) \
                     for i in range(desh.size[0]) \
                     for j in range(desh.size[1])]  

        # Build kd-tree for destination
        destree = KDTree.construct_from_data(desdata)

        # Switch file mode
        desh.mode = srch.mode
        for k in range(len(srcdata)):
            # Get locations from kd-tree sorted data
            i   = destree.data_listing[k][-2]
            j   = destree.data_listing[k][-1]
            i_s = srctree.data_listing[k][-2]
            j_s = srctree.data_listing[k][-1]

            # Overwrite original colors with colors from source file 
            des[i,j] = src[i_s,j_s]

        # Save to disk  
        desh.save(files[im_orig].replace('.','_'+`im`+'.'))

Willem

Posted 2014-07-09T04:03:39.377

Reputation: 1 528

I didn't notice this a year ago but it's quite good! – hobbs – 2015-07-29T14:36:01.283

16

Python

Just to keep the ball rolling, here's my own simple and painfully slow answer.

import Image

def countColors(image):
    colorCounts = {}
    for color in image.getdata():
        if color in colorCounts:
            colorCounts[color] += 1
        else:
            colorCounts[color] = 1
    return colorCounts

def colorDist(c1, c2):
    def ds(c1, c2, i):
        return (c1[i] - c2[i])**2
    return (ds(c1, c2, 0) + ds(c1, c2, 1) + ds(c1, c2, 2))**0.5

def findClosestColor(palette, color):
    closest = None
    minDist = (3*255**2)**0.5
    for c in palette:
        dist = colorDist(color, c)
        if dist < minDist:
            minDist = dist
            closest = c
    return closest

def removeColor(palette, color):
    if palette[color] == 1:
        del palette[color]
    else:
        palette[color] -= 1

def go(paletteFile, sourceFile):
    palette = countColors(Image.open(paletteFile).convert('RGB'))
    source = Image.open(sourceFile).convert('RGB')
    copy = Image.new('RGB', source.size)
    w, h = copy.size

    for x in range(w):
        for y in range(h):
            c = findClosestColor(palette, source.getpixel((x, y)))
            removeColor(palette, c)
            copy.putpixel((x, y), c)
        print x #print progress
    copy.save('copy.png')

#the respective file paths go here
go('../ag.png', '../r.png')

For each pixel in the source it looks for the unused pixel in the palette that is closest in the RGB color cube. It's basically the same as Quincunx's algorithm but with no randomness and a different color comparison function.

You can tell I move from left to right since the right side of the image has far less detail due to the depletion of similar colors.

River from American Gothic

River from American Gothic

Mona Lisa from Rainbow Spheres

Mona Lisa from Rainbow Spheres

Calvin's Hobbies

Posted 2014-07-09T04:03:39.377

Reputation: 84 000

1 Mme. Lisa is a bit yellowish... – tomsmeding – 2014-07-10T05:48:35.643

4 I really like the transition in the River from American Gothic from 'nice' on the left to 'abstract' on the right =) – flawr – 2014-07-10T18:39:58.633

12

Haskell

I tried a few different approaches using nearest neighbor searches before settling on this solution (which was actually my first idea). I first convert the images' pixel formats to YCbCr and construct two lists containing their pixel data. Then the lists are sorted, giving precedence to the luminance value. After that, I just replace the input image's sorted pixel list with the palette image's, re-sort it back to the original pixel order, and use it to construct a new image.
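The sort/replace/un-sort idea is compact enough to illustrate in a few lines; here is a Python sketch of the same technique (my own simplification, not a port of the Haskell, and keyed on the raw pixel value rather than Cb/Cr/luma):

```python
def swap_palette(source, palette, key=lambda c: c):
    """Sort both pixel lists with the same key, remember each source
    pixel's original position, then hand the k-th sorted palette color
    to the k-th sorted source pixel."""
    src = sorted(enumerate(source), key=lambda t: key(t[1]))
    pal = sorted(palette, key=key)
    out = [None] * len(source)
    for (pos, _), color in zip(src, pal):
        out[pos] = color
    return out
```

Since every palette entry is consumed exactly once, the output is always a valid permutation of the palette, whatever key is used.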

module Main where

import System.Environment    (getArgs)
import System.Exit           (exitSuccess, exitFailure)
import System.Console.GetOpt (getOpt, ArgOrder(..), OptDescr(..), ArgDescr(..))
import Data.List             (sortBy)

import Codec.Picture
import Codec.Picture.Types

import qualified Data.Vector as V

main :: IO ()
main = do
    (ioOpts, _) <- getArgs >>= getOpts
    opts        <- ioOpts
    image       <- loadImage $ imageFile opts
    palette     <- loadImage $ paletteFile opts
    case swapPalette image palette of
      Nothing -> do
          putStrLn "Error: image and palette dimensions do not match"
          exitFailure
      Just img ->
          writePng (outputFile opts) img

swapPalette :: Image PixelYCbCr8 -> Image PixelYCbCr8 -> Maybe (Image PixelRGB8)
swapPalette img pal
    | area1 == area2 =
        let cmpCr (_, (PixelYCbCr8 _ _ r1)) (_, (PixelYCbCr8 _ _ r2)) = r1 `compare` r2
            cmpCb (_, (PixelYCbCr8 _ c1 _)) (_, (PixelYCbCr8 _ c2 _)) = c1 `compare` c2
            cmpY  (_, (PixelYCbCr8 y1 _ _)) (_, (PixelYCbCr8 y2 _ _)) = y2 `compare` y1
            w       = imageWidth  img
            h       = imageHeight img
            imgData = sortBy cmpY $ sortBy cmpCr $ sortBy cmpCb $ zip [1 :: Int ..] $ getPixelList img
            palData = sortBy cmpY $ sortBy cmpCr $ sortBy cmpCb $ zip [1 :: Int ..] $ getPixelList pal
            newData = zipWith (\(n, _) (_, p) -> (n, p)) imgData palData
            pixData = map snd $ sortBy (\(n1, _) (n2, _) -> n1 `compare` n2) newData
            dataVec = V.reverse $ V.fromList pixData
        in  Just $ convertImage $ generateImage (lookupPixel dataVec w h) w h
    | otherwise = Nothing
    where area1 = (imageWidth img) * (imageHeight img)
          area2 = (imageWidth pal) * (imageHeight pal)

lookupPixel :: V.Vector PixelYCbCr8 -> Int -> Int -> Int -> Int -> PixelYCbCr8
lookupPixel vec w h x y = vec V.! i
    where i = flattenIndex w h x y

getPixelList :: Image PixelYCbCr8 -> [PixelYCbCr8]
getPixelList img = foldl (\ps (x, y) -> (pixelAt img x y):ps) [] coords
    where coords = [(x, y) | x <- [0..(imageWidth img) - 1], y <- [0..(imageHeight img) - 1]]

flattenIndex :: Int -> Int -> Int -> Int -> Int
flattenIndex _ h x y = y + (x * h)

-------------------------------------------------
-- Command Line Option Functions
-------------------------------------------------

getOpts :: [String] -> IO (IO Options, [String])
getOpts args = case getOpt Permute options args of
    (opts, nonOpts, []) -> return (foldl (>>=) (return defaultOptions) opts, nonOpts)
    (_, _, errs)        -> do
        putStrLn $ concat errs
        printUsage
        exitFailure

data Options = Options
  { imageFile   :: Maybe FilePath
  , paletteFile :: Maybe FilePath
  , outputFile  :: FilePath
  }

defaultOptions :: Options
defaultOptions = Options
  { imageFile   = Nothing
  , paletteFile = Nothing
  , outputFile  = "out.png"
  }

options :: [OptDescr (Options -> IO Options)]
options = [ Option ['i'] ["image"]   (ReqArg setImage   "FILE") "",
            Option ['p'] ["palette"] (ReqArg setPalette "FILE") "",
            Option ['o'] ["output"]  (ReqArg setOutput  "FILE") "",
            Option ['v'] ["version"] (NoArg showVersion)        "",
            Option ['h'] ["help"]    (NoArg exitPrintUsage)     ""]

setImage :: String -> Options -> IO Options
setImage image opts = return $ opts { imageFile = Just image }

setPalette :: String -> Options -> IO Options
setPalette palette opts = return $ opts { paletteFile = Just palette }

setOutput :: String -> Options -> IO Options
setOutput output opts = return $ opts { outputFile = output }

printUsage :: IO ()
printUsage = do
    putStrLn "Usage: repix [OPTION...] -i IMAGE -p PALETTE [-o OUTPUT]"
    putStrLn "Rearrange pixels in the palette file to closely resemble the given image."
    putStrLn ""
    putStrLn "-i, --image    specify the image to transform"
    putStrLn "-p, --palette  specify the image to use as the palette"
    putStrLn "-o, --output   specify the output image file"
    putStrLn ""
    putStrLn "-v, --version  display version information and exit"
    putStrLn "-h, --help     display this help and exit"

exitPrintUsage :: a -> IO Options
exitPrintUsage _ = do
    printUsage
    exitSuccess

showVersion :: a -> IO Options
showVersion _ = do
    putStrLn "Pixel Rearranger v0.1"
    exitSuccess

-------------------------------------------------
-- Image Loading Util Functions
-------------------------------------------------

loadImage :: Maybe FilePath -> IO (Image PixelYCbCr8)
loadImage Nothing     = do
    printUsage
    exitFailure
loadImage (Just path) = do
    rdImg <- readImage path
    case rdImg of
      Left err -> do
          putStrLn err
          exitFailure
      Right img -> getRGBImage img

getRGBImage :: DynamicImage -> IO (Image PixelYCbCr8)
getRGBImage dynImg =
    case dynImg of
      ImageYCbCr8 img -> return img
      ImageRGB8   img -> return $ convertImage img
      ImageY8     img -> return $ convertImage (promoteImage img :: Image PixelRGB8)
      ImageYA8    img -> return $ convertImage (promoteImage img :: Image PixelRGB8)
      ImageCMYK8  img -> return $ convertImage (convertImage img :: Image PixelRGB8)
      ImageRGBA8  img -> return $ convertImage (pixelMap dropTransparency img :: Image PixelRGB8)
      _               -> do
          putStrLn "Error: incompatible image type."
          exitFailure

Results

The images my program produces tend to be less vivid than many of the other solutions, and it doesn't deal with large solid areas or gradients well.

Here is a link to the full album.

American Gothic -> Mona Lisa

Mona Lisa -> American Gothic

Spheres -> Mona Lisa

The Scream -> Starry Night

The Scream -> Spheres

ChaseC

Posted 2014-07-09T04:03:39.377

Reputation: 366

3 I like the dithering on (Spheres -> Mona Lisa) but where are those ugly artifacts on (Scream -> Spheres) from? – John Dvorak – 2014-07-13T04:14:36.613

1 The artifacts are a side effect of how my algorithm sorts the pixels. Right now, the red-difference of each pixel has precedence over the blue-difference in the sorting step, which means that similar colors in the input image can be matched with very different colors from the palette image. However, I'm almost certain that this same effect is what causes the apparent dithering in images like the Spheres -> Mona Lisa, so I decided to keep it. – ChaseC – 2014-07-13T04:44:47.157

9

Java

Inspired by the previous java answer from Quincunx

package paletteswap;

import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import javax.imageio.ImageIO;

public class Test
{
    public static class Bits
    {

        public static BitSet convert( int value )
        {
            BitSet bits = new BitSet();
            int index = 0;
            while ( value != 0L )
            {
                if ( value % 2 != 0 )
                {
                    bits.set( index );
                }
                ++index;
                value = value >>> 1;
            }
            return bits;
        }

        public static int convert( BitSet bits )
        {
            int value = 0;
            for ( int i = 0; i < bits.length(); ++i )
            {
                value += bits.get( i ) ? ( 1 << i ) : 0;
            }
            return value;
        }
    }

    public static void main( String[] args ) throws IOException
    {
    BufferedImage source = ImageIO.read( resource( "river.png" ) ); // my names for the files
    BufferedImage palette = ImageIO.read( resource( "farmer.png" ) );
        BufferedImage result = rearrange( source, palette );
        ImageIO.write( result, "png", resource( "result.png" ) );
    }

    public static BufferedImage rearrange( BufferedImage source, BufferedImage palette )
    {
        BufferedImage result = new BufferedImage( source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB );

        // This creates a list of points in the Source image.
        // Then, we shuffle it and will draw points in that order.
        List<Point> samples = getPoints( source.getWidth(), source.getHeight() );
        Collections.sort( samples, new Comparator<Point>()
        {

            @Override
            public int compare( Point o1, Point o2 )
            {
                int c1 = getRGB( source, o1.x, o1.y );
                int c2 = getRGB( source, o2.x, o2.y );
                return c1 - c2;
            }
        } );

        // Create a list of colors in the palette.
        List<Integer> colors = getColors( palette );

        while ( !samples.isEmpty() )
        {
            Point currentPoint = samples.remove( 0 );
            int sourceAtPoint = getRGB( source, currentPoint.x, currentPoint.y );
            int colorIndex = binarySearch( colors, sourceAtPoint );
            int bestColor = colors.remove( colorIndex );
            setRGB( result, currentPoint.x, currentPoint.y, bestColor );
        }
        return result;
    }

    public static int unpack( int rgbPacked )
    {
        BitSet packed = Bits.convert( rgbPacked );
        BitSet rgb = Bits.convert( 0 );
        for (int i=0; i<8; i++)
        {
            rgb.set( i,    packed.get( i*3 )  );
            rgb.set( i+16,    packed.get( i*3+1 )  );
            rgb.set( i+8,    packed.get( i*3+2 )  );
        }
        return Bits.convert( rgb);
    }

    public static int pack( int rgb )
    {
        int myrgb = rgb & 0x00FFFFFF;

        BitSet bits = Bits.convert( myrgb );
        BitSet packed = Bits.convert( 0 );

        for (int i=0; i<8; i++)
        {
            packed.set( i*3,    bits.get( i )  );
            packed.set( i*3+1,  bits.get( i+16 )  );
            packed.set( i*3+2,  bits.get( i+8 )  );
        }
        return Bits.convert( packed);

    }

    public static int getRGB( BufferedImage image, int x, int y )
    {
        return pack( image.getRGB( x, y ) );
    }

    public static void setRGB( BufferedImage image, int x, int y, int color )
    {
        image.setRGB( x, y, unpack( color ) );
    }

    public static List<Point> getPoints( int width, int height )
    {
        List<Point> points = new ArrayList<>( width * height );
        for ( int x = 0; x < width; x++ )
        {
            for ( int y = 0; y < height; y++ )
            {
                points.add( new Point( x, y ) );
            }
        }
        return points;
    }

    public static List<Integer> getColors( BufferedImage img )
    {
        int width = img.getWidth();
        int height = img.getHeight();
        List<Integer> colors = new ArrayList<>( width * height );
        for ( int x = 0; x < width; x++ )
        {
            for ( int y = 0; y < height; y++ )
            {
                colors.add( getRGB( img, x, y ) );
            }
        }
        Collections.sort( colors );
        return colors;
    }

    public static int binarySearch( List<Integer> toSearch, int obj )
    {
        int index = toSearch.size() >> 1;
        for ( int guessChange = toSearch.size() >> 2; guessChange > 0; guessChange >>= 1 )
        {
            int value = toSearch.get( index );
            if ( obj == value )
            {
                return index;
            }
            else if ( obj < value )
            {
                index -= guessChange;
            }
            else
            {
                index += guessChange;
            }
        }
        return index;
    }

    public static File resource( String fileName )
    { // This method is here solely
        // for my ease of use (I put
        // the files under <Project
        // Name>/Resources/ )
        return new File( System.getProperty( "user.home" ) + "/pictureswap/" + fileName );
    }
}

Mona lisa -> Farmers

enter image description here

What it does: instead of drawing points in a random order, it sorts the points to be replaced by their intensity.

RobAu

Posted 2014-07-09T04:03:39.377

Reputation: 641

8

Ruby

Overview:

Really simple approach, but seems to get pretty good results:

  1. Take the palette and target, and sort the pixels of each by some function; call these the "reference" arrays. I've chosen to sort by HSLA, but preferring Luminance to Saturation to Hue (aka "LSHA")
  2. Make the output image by iterating over each pixel of the target image, finding where it gets sorted to in the target reference array, and taking the pixel from the palette which got sorted to the same index in the palette reference array.
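The two steps above can be sketched in Python (a minimal stand-in for the Ruby code below; plain luminance is used as the sort function here instead of the full LSHA key):

```python
def luminance(p):
    r, g, b = p
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def rearrange(target_pixels, palette_pixels):
    # Rank the indices of both pixel lists with the same sort function.
    target_order = sorted(range(len(target_pixels)),
                          key=lambda i: luminance(target_pixels[i]))
    palette_order = sorted(range(len(palette_pixels)),
                           key=lambda i: luminance(palette_pixels[i]))
    out = [None] * len(target_pixels)
    # The k-th darkest target position receives the k-th darkest palette pixel.
    for t_idx, p_idx in zip(target_order, palette_order):
        out[t_idx] = palette_pixels[p_idx]
    return out
```

Because the output is just a permutation of the palette list, the "each pixel used exactly once" constraint holds by construction.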

Code:

require 'rubygems'
require 'chunky_png'
require 'rmagick' # just for the rgba => hsla converter, feel free to use something lighter-weight you have on hand

def pixel_array_for_image(image)
  # [r, g, b, a]
  image.pixels.map{|p| ChunkyPNG::Color.to_truecolor_alpha_bytes(p)}
end

def sorted_pixel_references(pixel_array)
  pixel_array.map{|a| yield(a)}.map.with_index.sort_by(&:first).map(&:last)
end

def sort_by_lsha(pixel_array)
  sorted_pixel_references(pixel_array) {|p|
    # feel free to drop in any sorting function you want here!
    hsla = Magick::Pixel.new(*p).to_hsla # [h, s, l, a]
    [hsla[2], hsla[1], hsla[0], hsla[3]]
  }
end

def make_target_out_of_palette(target_filename, palette_filename, output_filename)
  puts "making #{target_filename} out of #{palette_filename}"

  palette = ChunkyPNG::Image.from_file(palette_filename)
  target = ChunkyPNG::Image.from_file(target_filename)
  puts "  loaded images"

  palette_array = pixel_array_for_image(palette)
  target_array = pixel_array_for_image(target)
  puts "  have pixel arrays"

  palette_spr = sort_by_lsha(palette_array)
  target_spr = sort_by_lsha(target_array)
  puts "  have sorted-pixel reference arrays"

  output = ChunkyPNG::Image.new(target.dimension.width, target.dimension.height, ChunkyPNG::Color::TRANSPARENT)
  (0...target_array.count).each { |index|
    spr_index = target_spr.index(index)
    index_in_palette = palette_spr[spr_index]
    palette_pixel = palette_array[index_in_palette]
    index_as_x = (index % target.dimension.width)
    index_as_y = (index / target.dimension.width)
    output[index_as_x, index_as_y] = ChunkyPNG::Color.rgba(*palette_pixel)
  }
  output.save(output_filename)
  puts "  saved to #{output_filename}"
end

palette_filename, target_filename, output_filename = ARGV
make_target_out_of_palette(target_filename, palette_filename, output_filename)

Results:

http://imgur.com/a/Iu7Ds

Highlights:

Starry Night made from Scream American Gothic made from Mona Lisa Mona Lisa made from the river photo The river photo made from Starry Night

21echoes

Posted 2014-07-09T04:03:39.377

Reputation: 81

2Can you add the source palettes for each image? – PlasmaHH – 2014-07-11T15:11:51.610

7

Perl

Here is a rather simplistic approach. It takes about five seconds to generate 100 frames per image pair on my MacBook Pro with a memory footprint of about 120 MB.

The idea is to sort the pixels of both the source and palette images by 24-bit packed RGB, and replace colors in the source with colors from the palette sequentially.
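The same idea as a minimal Python sketch (illustrative only; the actual Perl script below also writes out animation frames):

```python
def pack_rgb(r, g, b):
    # 24-bit packed RGB, e.g. (255, 0, 0) -> 0xFF0000
    return r << 16 | g << 8 | b

def recolor(source, palette):
    # source: list of ((x, y), (r, g, b)) pairs; palette: list of (r, g, b)
    # Sort source positions and palette colors by packed RGB, then pair them up.
    positions = [xy for xy, rgb in sorted(source, key=lambda s: pack_rgb(*s[1]))]
    colors = sorted(palette, key=lambda c: pack_rgb(*c))
    return dict(zip(positions, colors))
```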

#!/usr/bin/env perl

use 5.020; # just because
use strict;
use warnings;

use Const::Fast;
use GD;
GD::Image->trueColor(1);

use Path::Class;

const my $COLOR => 0;
const my $COORDINATES => 1;
const my $RGB => 2;
const my $ANIMATION_FRAMES => 100;

const my %MASK => (
    RED => 0x00ff0000,
    GREEN => 0x0000ff00,
    BLUE => 0x000000ff,
);

run(@ARGV);

sub run {
    unless (@_ == 2) {
        die "Need source and palette images\n";
    }
    my $source_file = file(shift)->resolve;
    my $palette_file = file(shift)->resolve;

    my $source = GD::Image->new("$source_file")
        or die "Failed to create source image from '$source_file'";
    my $palette = GD::Image->new("$palette_file")
        or die "Failed to create palette image from '$palette_file'";

    my %source =  map { $_ => $source->$_ } qw(width height);
    my %palette = map { $_ => $palette->$_ } qw(width height);
    my ($frame_prefix) = ($source_file->basename =~ /\A([^.]+)/);

    unless (
        (my $source_area = $source{width} * $source{height}) <=
        (my $palette_area = $palette{width} * $palette{height})
    ) {
        die "Source area ($source_area) is greater than palette area ($palette_area)";
    }

    my ($last_frame, $png) = recreate_source_image_from_palette(
        \%source,
        get_source_pixels( get_pixels_by_color($source, \%source) ),
        get_palette_colors( get_pixels_by_color($palette, \%palette) ),
        sub { save_frame($frame_prefix, @_) }
    );

    save_frame($frame_prefix, $last_frame, $png);
    return;
}

sub save_frame {
    my $frame_prefix = shift;
    my $frame = shift;
    my $png = shift;
    file(
        sprintf("${frame_prefix}-%d.png", $frame)
    )->spew(iomode => '>:raw', $$png);
    return;
}

sub recreate_source_image_from_palette {
    my $dim = shift;
    my $source_pixels = shift;
    my $palette_colors = shift;
    my $callback = shift;
    my $frame = 0;

    my %colors;
    $colors{$_} = undef for @$palette_colors;

    my $gd = GD::Image->new($dim->{width}, $dim->{height}, 1);
    for my $x (keys %colors) {
          $colors{$x} = $gd->colorAllocate(unpack_rgb($x));
    }

    my $period = sprintf '%.0f', @$source_pixels / $ANIMATION_FRAMES;
    for my $i (0 .. $#$source_pixels) {
        $gd->setPixel(
            @{ $source_pixels->[$i] },
            $colors{ $palette_colors->[$i] }
        );
        if ($i % $period == 0) {
            $callback->($frame, \ $gd->png);
            $frame += 1;
        }
    }
    return ($frame, \ $gd->png);
}

sub get_palette_colors { [ map sprintf('%08X', $_->[$COLOR]), @{ $_[0] } ] }

sub get_source_pixels { [ map $_->[$COORDINATES], @{ $_[0] } ] }

sub get_pixels_by_color {
    my $gd = shift;
    my $dim = shift;
    return [
        sort { $a->[$COLOR] <=> $b->[$COLOR] }
        map {
            my $y = $_;
            map {
                [ pack_rgb( $gd->rgb( $gd->getPixel($_, $y) ) ), [$_, $y] ];
            } 0 .. $dim->{width} - 1
        } 0 .. $dim->{height} - 1
    ];
}

sub pack_rgb { $_[0] << 16 | $_[1] << 8 | $_[2] }

sub unpack_rgb {
    my ($r, $g, $b) = map $MASK{$_} & hex($_[0]), qw(RED GREEN BLUE);
    return ($r >> 16, $g >> 8, $b);
}

Output

Scream using Starry Night palette

Scream using Starry Night palette

American Gothic using Mona Lisa colors

American Gothic using Mona Lisa colors

Mona Lisa using Scream colors

Mona Lisa using Scream colors

River using Marbles colors

River using Marbles colors

Animations

I was lazy, so I put the animations on YouTube: Mona Lisa using colors from Starry Night and American Gothic using colors from Mona Lisa .

Sinan Ünür

Posted 2014-07-09T04:03:39.377

Reputation: 233

7

Python

Figured I'd take this little opportunity to take up code golf and use it as an excuse to work on my Python chops, since it's been coming up more often at work these days. I ran through a couple of algorithms, including a few with O(n^2) and O(n log n) time, to try to optimize the colors, but it became very apparent that this was computationally expensive and actually had very little effect on the apparent result. So below is a take that works in O(n) time (basically instantaneously on my system), gets the most important visual element (luminance) as right as is reasonable, and lets the chroma land where it may.

from PIL import Image
def check(palette, copy):
    palette = sorted(palette.getdata())
    copy = sorted(copy.getdata())
    print "Master says it's good!" if copy == palette else "The master disapproves."

def GetLuminance(pixel):
    # Extract the pixel channel data (PIL returns RGB order)
    r, g, b = pixel
    # and use the standard luminance weights to get luminance.
    return .3*r+.59*g+.11*b

print "Putting pixels on the palette..."
# Open up the image and get all of the pixels out of it. (Memory intensive!)
palette = Image.open("2.png").convert(mode="RGB")

pixelsP = [] # Allocate the array
width,height = palette.size # Unpack the image size
for y in range(height): # Scan the lines
    for x in range(width): # within the line, scan the pixels
        curpixel = palette.getpixel((x,y)) # get the pixel
        pixelsP.append((GetLuminance(curpixel),curpixel)) # and add a (luminance, color) tuple to the array.


# sort the pixels by the calculated luminance
pixelsP.sort()

print "Getting the reference picture..."
# Open up the image and get all of the pixels out of it. (Memory intensive!)
source = Image.open("6.png").convert(mode="RGB")
pixelsR = [] # Allocate the array
width,height = source.size # Unpack the image size
for y in range(height): # Scan the lines
    for x in range(width): # within the line, scan the pixels
        curpixel = source.getpixel((x,y)) # get the pixel
        pixelsR.append((GetLuminance(curpixel),(x,y))) # and add a (luminance, position) tuple

# Sort the Reference pixels by luminance too
pixelsR.sort()

# Now for the neat observation. Luminance matters more to humans than chromanance,
# given this then we want to match luminance as well as we can. However, we have
# a finite luminance distribution to work with. Since we can't change that, it's best
# just to line the two images up, sorted by luminance, and just simply assign the
# luminance directly. The chrominance will be all kinds of whack, but fixing that
# by way of loose sorting possible chrominance errors takes this algorithm from O(n)
# to O(n^2), which just takes forever (trust me, I've tried it.)

print "Painting reference with palette..."
for p in range(len(pixelsP)): # For each pixel in the palette
    pl,pixel = pixelsP[p] # Grab the pixel from the palette
    l,cord = pixelsR[p] # Grab the location from the reference
    source.putpixel(cord,pixel) # and assign the palette pixel to the reference pixel's place

print "Applying fixative..."
# save out the result.
source.save("o.png","PNG")

print "Handing it to the master to see if he approves..."
check(palette, source)
print "Done!"

The end result has some neat consequences. However, if the images have wildly different luminance distributions, there's not much that can be done without getting advanced and dithering, which might be an interesting thing to do at some point, but is excluded here as a matter of brevity.

Everything -> Mona Lisa

American Gothic -> Mona Lisa Starry Night -> Mona Lisa Scream -> Mona Lisa River -> Mona Lisa Spheres -> Mona Lisa

Mona Lisa -> Spheres

Mona Lisa -> Spheres

JimTheCactus

Posted 2014-07-09T04:03:39.377

Reputation: 71

6

Mathematica - random permutations

Idea

Select two pixels in the source image and check whether the error relative to the destination image would decrease if these two pixels were swapped. We add a small random number (-d|+d) to the result to avoid local minima. Repeat. For speed, do this with 10000 pixels at once.

It is a bit like a Markov chain. It would probably be good to decrease the randomness during the optimization process, similar to simulated annealing.
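A single swap test from the idea above, sketched in Python (an illustrative stand-in for the Mathematica code below; `err` is squared distance in whatever color space is chosen):

```python
import random

def err(a, b):
    # Squared distance between two color triples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def improve(arranged, target, delta=0.05, rng=random):
    # Pick two positions; swap their pixels if doing so lowers the total
    # error, with a small random perturbation to escape local minima.
    i, j = rng.sample(range(len(arranged)), 2)
    before = err(arranged[i], target[i]) + err(arranged[j], target[j])
    after = err(arranged[i], target[j]) + err(arranged[j], target[i])
    if before - after + rng.uniform(-delta**2, delta**2) > 0:
        arranged[i], arranged[j] = arranged[j], arranged[i]
```

Repeating `improve` many times (the answer batches 10000 candidate pairs per step) monotonically drives the arrangement toward the target when delta is 0.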

Code

colorSpace = "RGB";
\[Delta] = 0.05;
ClearAll[loadImgur, imgToList, listToImg, improveN, err, rearrange, \
rearrangeImg]
loadImgur[tag_] := 
 RemoveAlphaChannel@
  Import["http://i.stack.imgur.com/" <> tag <> ".png"]
imgToList[img_] := Flatten[ImageData[ColorConvert[img, colorSpace]], 1]
listToImg[u_, w_] := Image[Partition[u, w], ColorSpace -> colorSpace]
err[{x_, y_, z_}] := x^2 + y^2 + z^2
improveN[a_, t_, n_] := Block[{i, j, ai, aj, ti, tj},
  {i, j} = Partition[RandomSample[Range@Length@a, 2 n], n];
  ai = a[[i]];
  aj = a[[j]];
  ti = t[[i]];
  tj = t[[j]];
  ReplacePart[
   a, (#1 -> #3) & @@@ 
    Select[Transpose[{i, 
       err /@ (ai - ti) + err /@ (aj - tj) - err /@ (ai - tj) - 
        err /@ (aj - ti) + RandomReal[\[Delta]^2 {-1, +1}, n], aj}], #[[2]] > 0 &]]
  ]
rearrange[ua_, ub_, iterations_: 100] := Block[{tmp = ua},
  Do[tmp = improveN[tmp, ub, Floor[.1 Length@ua]];, {i, iterations}]; 
  tmp]
rearrangeImg[a_, b_, iterations_: 100] := With[{imgdst = loadImgur[b]},
  listToImg[rearrange[
    RandomSample@imgToList@loadImgur[a],
    imgToList@imgdst, iterations], First@ImageDimensions@imgdst]]
rearrangeImg["JXgho","itzIe"]

Results

Gothic to Mona Lisa. Left: Using LAB color space (delta=0). Right: Using RBG color space (delta=0) img7 img8

Gothic to Mona Lisa. Left: RGB color space, delta=0.05. Right: RGB color space, delta=0.15. img9 img10

The following images show animations for around 3,500,000 swaps with RGB color space and delta=0.

img1 img2 img3 img4 img5 img6

Danvil

Posted 2014-07-09T04:03:39.377

Reputation: 250

Looks like aditsu's way, but I am looking forward to your results. – Leif – 2014-07-12T18:42:53.507

5

Processing

The source and palette are shown side by side, and there is an animation of the pixels being taken from the palette.

In the line int i = chooseIndexIncremental();, you can change the chooseIndex* functions to see the selection order of the pixels.

int scanRate = 20; // pixels per frame

// image filenames
String source = "N6IGO.png";
String palette = "JXgho.png";

PImage src, pal, res;
int area;
int[] lut;
boolean[] processed;
boolean[] taken;
int count = 0;

void start() {
  //size(800, 600);

  src = loadImage(source);
  pal = loadImage(palette);

  size(src.width + pal.width, max(src.height, pal.height));

  src.loadPixels();
  pal.loadPixels();

  int areaSrc = src.pixels.length;
  int areaPal = pal.pixels.length;

  if (areaSrc != areaPal) {
    println("Areas mismatch: src: " + areaSrc + ", pal: " + areaPal);
    return;
  }

  area = areaSrc;

  println("Area: " + area);

  lut = new color[area];
  taken = new boolean[area];
  processed = new boolean[area];

  randomSeed(1);
}

void draw() {
  background(0);
  image(src, 0, 0);
  image(pal, src.width, 0);

  for (int k = 0; k < scanRate; k ++)
  if (count < area) {
    // choose from chooseIndexRandom, chooseIndexSkip and chooseIndexIncremental
    int i = chooseIndexIncremental();
    process(i);

    processed[i] = true;
    count ++;
  }
}

int chooseIndexRandom() {
  int i = 0;
  do i = (int) random(area); while (processed[i]);
  return i;
}

int chooseIndexSkip(int n) {
  int i = (n * count) % area;
  while (processed[i] || i >= area) i = (int) random(area);
  return i;
}

int chooseIndexIncremental() {
  return count;
}

void process(int i) {
  lut[i] = findPixel(src.pixels[i]);
  taken[lut[i]] = true;

  src.loadPixels();
  src.pixels[i] = pal.pixels[lut[i]];
  src.updatePixels();

  pal.loadPixels();
  pal.pixels[lut[i]] = color(0);
  pal.updatePixels();

  stroke(src.pixels[i]);
  int sy = i / src.width;
  int sx = i % src.width;

  int j = lut[i];
  int py = j / pal.width;
  int px = j % pal.width;
  line(sx, sy, src.width + px, py);
}

int findPixel(color c) {
  int best;
  do best = (int) random(area); while (taken[best]);
  float bestDist = colorDist(c, pal.pixels[best]);

  for (int k = 0; k < area; k ++) {
    if (taken[k]) continue;
    color c1 = pal.pixels[k];
    float dist = colorDist(c, c1);
    if (dist < bestDist) {
      bestDist = dist;
      best = k;
    }
  }
  return best;
}

float colorDist(color c1, color c2) {
  return S(red(c1) - red(c2)) + S(green(c1) - green(c2)) + S(blue(c1) - blue(c2));
}

float S(float x) { return x * x; }

American Gothic -> Mona Lisa, incremental

incremental

American Gothic -> Mona Lisa, random

random

Ming-Tang

Posted 2014-07-09T04:03:39.377

Reputation: 5 383

2How does it look if you use the rainbow spheres palette? – treat your mods well – 2014-07-11T16:29:36.133

5

C-Sharp

No new/exciting ideas, but I thought I'd give it a try. It simply sorts the pixels, prioritizing brightness over saturation over hue. The code is reasonably short though, for what it's worth.

EDIT: Added an even shorter version

using System;
using System.Drawing;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        Bitmap sourceImg = new Bitmap("TheScream.png"),
            arrangImg = new Bitmap("StarryNight.png"),
            destImg = new Bitmap(arrangImg.Width, arrangImg.Height);

        List<Pix> sourcePixels = new List<Pix>(), arrangPixels = new List<Pix>();

        for (int i = 0; i < sourceImg.Width; i++)
            for (int j = 0; j < sourceImg.Height; j++)
                sourcePixels.Add(new Pix(sourceImg.GetPixel(i, j), i, j));

        for (int i = 0; i < arrangImg.Width; i++)
            for (int j = 0; j < arrangImg.Height; j++)
                arrangPixels.Add(new Pix(arrangImg.GetPixel(i, j), i, j));

        sourcePixels.Sort();
        arrangPixels.Sort();

        for (int i = 0; i < arrangPixels.Count; i++)
            destImg.SetPixel(arrangPixels[i].x,
                             arrangPixels[i].y,
                             sourcePixels[i].col);

        destImg.Save("output.png");
    }
}

class Pix : IComparable<Pix>
{
    public Color col;
    public int x, y;
    public Pix(Color col, int x, int y)
    {
        this.col = col;
        this.x = x;
        this.y = y;
    }

    public int CompareTo(Pix other)
    {
        return(int)(255 * 255 * 255 * (col.GetBrightness() - other.col.GetBrightness())
                + (255 * (col.GetHue() - other.col.GetHue()))
                + (255 * 255 * (col.GetSaturation() - other.col.GetSaturation())));
    }
}

Sample:

enter image description here

+

enter image description here

=

enter image description here

Timon Knigge

Posted 2014-07-09T04:03:39.377

Reputation: 181

5

Java

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import javax.imageio.ImageIO;

/**
 *
 * @author Quincunx
 */
public class PixelRearrangerMK2 {

    public static void main(String[] args) throws IOException {
        BufferedImage source = ImageIO.read(resource("Raytraced Spheres.png"));
        BufferedImage palette = ImageIO.read(resource("American Gothic.png"));
        BufferedImage result = rearrange(source, palette);
        ImageIO.write(result, "png", resource("result.png"));
        validate(palette, result);
    }

    public static BufferedImage rearrange(BufferedImage source, BufferedImage palette) {
        List<Color> sColors = Color.getColors(source);
        List<Color> pColors = Color.getColors(palette);
        Collections.sort(sColors);
        Collections.sort(pColors);

        BufferedImage result = new BufferedImage(source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
        Iterator<Color> sIter = sColors.iterator();
        Iterator<Color> pIter = pColors.iterator();

        while (sIter.hasNext()) {
            Color s = sIter.next();
            Color p = pIter.next();

            result.setRGB(s.x, s.y, p.rgb);
        }
        return result;
    }

    public static class Color implements Comparable {
        int x, y;
        int rgb;
        double hue;

        private int r, g, b;

        public Color(int x, int y, int rgb) {
            this.x = x;
            this.y = y;
            this.rgb = rgb;
            r = (rgb & 0xFF0000) >> 16;
            g = (rgb & 0x00FF00) >> 8;
            b = rgb & 0x0000FF;
            hue = Math.atan2(Math.sqrt(3) * (g - b), 2 * r - g - b);
        }

        @Override
        public int compareTo(Object o) {
            Color c = (Color) o;
            return hue < c.hue ? -1 : hue == c.hue ? 0 : 1;
        }

        public static List<Color> getColors(BufferedImage img) {
            List<Color> result = new ArrayList<>();
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    result.add(new Color(x, y, img.getRGB(x, y)));
                }
            }
            return result;
        }
    }

    //Validation and util methods follow
    public static void validate(BufferedImage palette, BufferedImage result) {
        List<Integer> paletteColors = getColorsAsInt(palette);
        List<Integer> resultColors = getColorsAsInt(result);
        Collections.sort(paletteColors);
        Collections.sort(resultColors);
        System.out.println(paletteColors.equals(resultColors));
    }

    public static List<Integer> getColorsAsInt(BufferedImage img) {
        int width = img.getWidth();
        int height = img.getHeight();
        List<Integer> colors = new ArrayList<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                colors.add(img.getRGB(x, y));
            }
        }
        Collections.sort(colors);
        return colors;
    }

    public static File resource(String fileName) {
        return new File(System.getProperty("user.dir") + "/Resources/" + fileName);
    }
}

Here's a completely different idea. I create a list of the colors of each image, then I sort according to hue, which is computed by Wikipedia's formula:

enter image description here

Unlike my other answer, this is extremely fast. It takes about 2 seconds, including the validation.

The result is some abstract art. Here are some images (mouseover to see to/from):

enter image description here enter image description here enter image description here enter image description here enter image description here enter image description here enter image description here

Justin

Posted 2014-07-09T04:03:39.377

Reputation: 19 757

5It looks like something Predator would see o_O – eithed – 2014-07-11T21:49:09.053

These are rather scary, but they are indeed correct! – Calvin's Hobbies – 2014-07-11T21:53:03.437

1@Calvin'sHobbies How is this scary? I call it beauty. – Justin – 2014-07-11T21:59:00.703

3Their faces are blank and eerie...but they do have a haunting beauty. – Calvin's Hobbies – 2014-07-12T00:55:09.667

Any one of these looks like something from the SCP wiki or the Year Zero NIN ARG. – undergroundmonorail – 2014-07-12T21:17:01.503

1The spheres are awesome. – siledh – 2014-07-17T08:06:19.357

5

Python

Well, I decided I might as well post my solution, since I spent the time to do it. Essentially, what I figured I would do is get the raw pixel data of the images, sort the data by brightness, and then put the values at the same index into a new image. I changed my mind about the brightness and used luminance instead. I got some pretty good results with this.

from PIL import Image
from optparse import OptionParser


def key_func(arr):
    # Sort the pixels by luminance
    r = 0.2126*arr[0] + 0.7152*arr[1] + 0.0722*arr[2]
    return r


def main():
    # Parse options from the command line
    parser = OptionParser()
    parser.add_option("-p", "--pixels", dest="pixels",
                      help="use pixels from FILE", metavar="FILE")
    parser.add_option("-i", "--input", dest="input", metavar="FILE",
                      help="recreate FILE")
    parser.add_option("-o", "--out", dest="output", metavar="FILE",
                      help="output to FILE", default="output.png")

    (options, args) = parser.parse_args()

    if not options.pixels or not options.input:
        raise Exception("Missing arguments. See help for more info.")

    # Load the images
    im1 = Image.open(options.pixels)
    im2 = Image.open(options.input)

    # Get the images into lists
    px1 = list(im1.getdata())
    px2 = list(im2.getdata())
    w1, h1 = im1.size
    w2, h2 = im2.size

    if w1*h1 != w2*h2:
        raise Exception("Images must have the same number of pixels.")

    # Sort the pixels lists by luminance
    px1_s = sorted(px1, key=key_func)
    px2_s = sorted(px2, key=key_func)

    # Create an array of nothing but black pixels
    arr = [(0, 0, 0)]*w2*h2

    # Create a dict that contains a list of locations with pixel value as key
    # This speeds up the process a lot, since before it was O(n^2)
    locations_cache = {}
    for index, val in enumerate(px2):
        v = str(val)
        if v in locations_cache:
            locations_cache[v].append(index)
        else:
            locations_cache[v] = [index]

    # Loop through each value of the sorted pixels
    for index, val in enumerate(px2_s):
        # Find the original location of the pixel
        # v = px2.index(val)
        v = locations_cache[str(val)].pop(0)
        # Set the value of the array at the given location to the pixel of the
        # equivalent luminance from the source image
        arr[v] = px1_s[index]
        # v2 = px1.index(px1_s[index])
        # Set the value of px2 to an arbitrary value outside of the RGB range
        # This prevents duplicate pixel locations
        # I would use "del px2[v]", but it wouldn't work for some reason
        px2[v] = (512, 512, 512)
        # px1[v2] = (512, 512, 512)
        # Print the percent progress
        print("%f%%" % (index/len(px2)*100))
        """if index % 500 == 0 or index == len(px2_s)-1:
            if h1 > h2:
                size = (w1+w2, h1)
            else:
                size = (w1+w2, h2)
            temp_im1 = Image.new("RGB", im2.size)
            temp_im1.putdata(arr)

            temp_im2 = Image.new("RGB", im1.size)
            temp_im2.putdata(px1)

            temp_im3 = Image.new("RGB", size)
            temp_im3.paste(temp_im1, (0, 0))
            temp_im3.paste(temp_im2, (w2, 0))
            temp_im3.save("still_frames/img_%04d.png" % (index/500))"""

    # Save the image
    im3 = Image.new('RGB', im2.size)
    im3.putdata(arr)
    im3.save(options.output)

if __name__ == '__main__':
    main()

Results

I was pretty happy with the results. It seemed to work consistently for all of the images I put through it.

Starry Night with Scream Pixels

Scream+Starry Night

Starry Night with Rainbow Pixels

Rainbow+Starry Night

Rainbow with Starry Night Pixels

Starry Night+Rainbow

Mona Lisa with Scream Pixels

Scream+Mona Lisa

River with Starry Night Pixels

Starry Night+River

Mona Lisa with American Gothic Pixels

Gothic+Mona Lisa

Mustang with Chevy Pixels

I should probably have scaled the images down given my hardware constraints, but oh well.

Chevy+Mustang

Chevy with Mustang Pixels

Mustang+Chevy

River with Rainbow Pixels

Rainbow+River

Mona Lisa with Rainbow Pixels

Rainbow+Mona Lisa

American Gothic with Rainbow Pixels

Rainbow+Gothic


Update I've added a few more pictures, and here are a couple of animations. The first shows how my method worked, and the second is using the script @Calvin'sHobbies posted.

My method

@Calvin'sHobbies script


Update 2 I added a dict storing the indices of pixels by their color. This made the script way more efficient. To see the original, check the revision history.
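The caching idea can be sketched like this (a minimal illustration of the technique, not the author's exact code): instead of calling `list.index()` for every pixel, which is O(n) per lookup, build a dict mapping each color to a queue of the positions where it occurs, then pop positions off as pixels are consumed.

```python
from collections import defaultdict, deque

def build_locations_cache(pixels):
    # Map each color to the (ordered) list of indices where it appears,
    # so finding the next unused location of a color is O(1).
    cache = defaultdict(deque)
    for i, color in enumerate(pixels):
        cache[color].append(i)
    return cache

pixels = [(0, 0, 0), (255, 255, 255), (0, 0, 0)]
cache = build_locations_cache(pixels)
assert cache[(0, 0, 0)].popleft() == 0   # first occurrence of black
assert cache[(0, 0, 0)].popleft() == 2   # next occurrence of black
```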

Colin Atkinson

Posted 2014-07-09T04:03:39.377

Reputation: 151

5

C++11

In the end, I settled on a relatively simple deterministic greedy algorithm. This is single threaded, but runs in a hair over 4 seconds on my machine.

The basic algorithm works by sorting all the pixels in both the palette and the target image by decreasing luminance (the L* of CIELAB). Then, for each of those ordered target pixels, it searches for the closest match among the first 75 remaining entries of the palette, using the square of the CIEDE2000 distance metric with the luminance of the palette colors clamped to that of the target. (For implementation and debugging of CIEDE2000, this page was very helpful.) The best match is then removed from the palette, assigned to the result, and the algorithm moves on to the next lightest pixel in the target image.

By using sorted luminance for both the target and the palette, we ensure that the overall luminance (the most visually salient element) of the result matches the target as closely as possible. Using a small window of 75 entries gives it a good shot at finding a matching color of about the right brightness, if there is one. If there isn't one, then the color will be off, but at least the brightness should be consistent. As a result, it degrades fairly gracefully when the palette colors don't match well.
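The windowed greedy idea above can be sketched in a few lines of Python (using plain squared RGB distance as a stand-in for the CIEDE2000 metric the real answer uses, and without the luminance clamping):

```python
WINDOW = 75  # how many of the brightest remaining palette entries to consider

def luminance(p):
    r, g, b = p
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def dist2(p, q):
    # Squared RGB distance; a placeholder for CIEDE2000.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def windowed_match(target, palette, window=WINDOW):
    # Visit target pixels from brightest to darkest; keep the palette in the
    # same order so the window always holds colors of about the right brightness.
    targets = sorted(enumerate(target), key=lambda t: -luminance(t[1]))
    pal = sorted(palette, key=luminance, reverse=True)
    result = [None] * len(target)
    for index, color in targets:
        best = min(range(min(window, len(pal))),
                   key=lambda i: dist2(pal[i], color))
        result[index] = pal.pop(best)
    return result
```

Because both lists are consumed in brightness order, the window slides down the palette naturally as pixels are assigned.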

Code

To compile this you will need the ImageMagick++ development libraries. A small CMake file to compile it is also included below.

palette.cpp

#include <Magick++.h>
#include <algorithm>
#include <functional>
#include <utility>
#include <set>

using namespace std;
using namespace Magick;

struct Lab
{
    PixelPacket rgb;
    float L, a, b;

    explicit Lab(
        PixelPacket rgb )
        : rgb( rgb )
    {
        auto R_srgb = static_cast< float >( rgb.red ) / QuantumRange;
        auto G_srgb = static_cast< float >( rgb.green ) / QuantumRange;
        auto B_srgb = static_cast< float >( rgb.blue ) / QuantumRange;
        auto R_lin = R_srgb < 0.04045f ? R_srgb / 12.92f :
            powf( ( R_srgb + 0.055f ) / 1.055f, 2.4f );
        auto G_lin = G_srgb < 0.04045f ? G_srgb / 12.92f :
            powf( ( G_srgb + 0.055f ) / 1.055f, 2.4f );
        auto B_lin = B_srgb < 0.04045f ? B_srgb / 12.92f :
            powf( ( B_srgb + 0.055f ) / 1.055f, 2.4f );
        auto X = 0.4124f * R_lin + 0.3576f * G_lin + 0.1805f * B_lin;
        auto Y = 0.2126f * R_lin + 0.7152f * G_lin + 0.0722f * B_lin;
        auto Z = 0.0193f * R_lin + 0.1192f * G_lin + 0.9505f * B_lin;
        auto X_norm = X / 0.9505f;
        auto Y_norm = Y / 1.0000f;
        auto Z_norm = Z / 1.0890f;
        auto fX = ( X_norm > 216.0f / 24389.0f ?
                    powf( X_norm, 1.0f / 3.0f ) :
                    X_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
        auto fY = ( Y_norm > 216.0f / 24389.0f ?
                    powf( Y_norm, 1.0f / 3.0f ) :
                    Y_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
        auto fZ = ( Z_norm > 216.0f / 24389.0f ?
                    powf( Z_norm, 1.0f / 3.0f ) :
                    Z_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
        L = 116.0f * fY - 16.0f;
        a = 500.0f * ( fX - fY );
        b = 200.0f * ( fY - fZ );
    }

    bool operator<(
        Lab const that ) const
    {
        return ( L > that.L ? true :
                 L < that.L ? false :
                 a > that.a ? true :
                 a < that.a ? false :
                 b > that.b );
    }

    Lab clampL(
        Lab const that ) const
    {
        auto result = Lab( *this );
        if ( result.L > that.L )
            result.L = that.L;
        return result;
    }

    float cieDe2000(
        Lab const that,
        float const k_L = 1.0f,
        float const k_C = 1.0f,
        float const k_H = 1.0f ) const
    {
        auto square = []( float value ){ return value * value; };
        auto degs = []( float rad ){ return rad * 180.0f / 3.14159265359f; };
        auto rads = []( float deg ){ return deg * 3.14159265359f / 180.0f; };
        auto C_1 = hypot( a, b );
        auto C_2 = hypot( that.a, that.b );
        auto C_bar = ( C_1 + C_2 ) * 0.5f;
        auto C_bar_7th = square( square( C_bar ) ) * square( C_bar ) * C_bar;
        auto G = 0.5f * ( 1.0f - sqrtf( C_bar_7th / ( C_bar_7th + 6103515625.0f ) ) );
        auto a_1_prime = ( 1.0f + G ) * a;
        auto a_2_prime = ( 1.0f + G ) * that.a;
        auto C_1_prime = hypot( a_1_prime, b );
        auto C_2_prime = hypot( a_2_prime, that.b );
        auto h_1_prime = C_1_prime == 0.0f ? 0.0f : degs( atan2f( b, a_1_prime ) );
        if ( h_1_prime < 0.0f )
            h_1_prime += 360.0f;
        auto h_2_prime = C_2_prime == 0.0f ? 0.0f : degs( atan2f( that.b, a_2_prime ) );
        if ( h_2_prime < 0.0f )
            h_2_prime += 360.0f;
        auto delta_L_prime = that.L - L;
        auto delta_C_prime = C_2_prime - C_1_prime;
        auto delta_h_prime =
            C_1_prime * C_2_prime == 0.0f ? 0 :
            fabs( h_2_prime - h_1_prime ) <= 180.0f ? h_2_prime - h_1_prime :
            h_2_prime - h_1_prime > 180.0f ? h_2_prime - h_1_prime - 360.0f :
            h_2_prime - h_1_prime + 360.0f;
        auto delta_H_prime = 2.0f * sqrtf( C_1_prime * C_2_prime ) *
            sinf( rads( delta_h_prime * 0.5f ) );
        auto L_bar_prime = ( L + that.L ) * 0.5f;
        auto C_bar_prime = ( C_1_prime + C_2_prime ) * 0.5f;
        auto h_bar_prime =
            C_1_prime * C_2_prime == 0.0f ? h_1_prime + h_2_prime :
            fabs( h_1_prime - h_2_prime ) <= 180.0f ? ( h_1_prime + h_2_prime ) * 0.5f :
            h_1_prime + h_2_prime < 360.0f ? ( h_1_prime + h_2_prime + 360.0f ) * 0.5f :
            ( h_1_prime + h_2_prime - 360.0f ) * 0.5f;
        auto T = ( 1.0f
                   - 0.17f * cosf( rads( h_bar_prime - 30.0f ) )
                   + 0.24f * cosf( rads( 2.0f * h_bar_prime ) )
                   + 0.32f * cosf( rads( 3.0f * h_bar_prime + 6.0f ) )
                   - 0.20f * cosf( rads( 4.0f * h_bar_prime - 63.0f ) ) );
        auto delta_theta = 30.0f * expf( -square( ( h_bar_prime - 275.0f ) / 25.0f ) );
        auto C_bar_prime_7th = square( square( C_bar_prime ) ) *
            square( C_bar_prime ) * C_bar_prime;
        auto R_C = 2.0f * sqrtf( C_bar_prime_7th / ( C_bar_prime_7th + 6103515625.0f ) );
        auto S_L = 1.0f + ( 0.015f * square( L_bar_prime - 50.0f ) /
                            sqrtf( 20.0f + square( L_bar_prime - 50.0f ) ) );
        auto S_C = 1.0f + 0.045f * C_bar_prime;
        auto S_H = 1.0f + 0.015f * C_bar_prime * T;
        auto R_T = -sinf( rads( 2.0f * delta_theta ) ) * R_C;
        return (
            square( delta_L_prime / ( k_L * S_L ) ) +
            square( delta_C_prime / ( k_C * S_C ) ) +
            square( delta_H_prime / ( k_H * S_H ) ) +
            R_T * delta_C_prime * delta_H_prime / ( k_C * S_C * k_H * S_H ) );
    }

};

Image read_image(
    char * const filename )
{
    auto result = Image( filename );
    result.type( TrueColorType );
    result.matte( true );
    result.backgroundColor( Color( 0, 0, 0, QuantumRange ) );
    return result;
}

template< typename T >
multiset< T > map_image(
    Image const &image,
    function< T( unsigned, PixelPacket ) > const transform )
{
    auto width = image.size().width();
    auto height = image.size().height();
    auto result = multiset< T >();
    auto pixels = image.getConstPixels( 0, 0, width, height );
    for ( auto index = 0; index < width * height; ++index, ++pixels )
        result.emplace( transform( index, *pixels ) );
    return result;
}

int main(
    int argc,
    char **argv )
{
    auto palette = map_image(
        read_image( argv[ 1 ] ),
        function< Lab( unsigned, PixelPacket ) >(
            []( unsigned index, PixelPacket rgb ) {
                return Lab( rgb );
            } ) );

    auto target_image = read_image( argv[ 2 ] );
    auto target_colors = map_image(
        target_image,
        function< pair< Lab, unsigned >( unsigned, PixelPacket ) >(
            []( unsigned index, PixelPacket rgb ) {
                return make_pair( Lab( rgb ), index );
            } ) );

    auto pixels = target_image.setPixels(
        0, 0,
        target_image.size().width(),
        target_image.size().height() );
    for ( auto &&target : target_colors )
    {
        auto best_color = palette.begin();
        auto best_difference = 1.0e38f;
        auto count = 0;
        for ( auto candidate = palette.begin();
              candidate != palette.end() && count < 75;
              ++candidate, ++count )
        {
            auto difference = target.first.cieDe2000(
                candidate->clampL( target.first ) );
            if ( difference < best_difference )
            {
                best_color = candidate;
                best_difference = difference;
            }
        }
        pixels[ target.second ] = best_color->rgb;
        palette.erase( best_color );
    }
    target_image.syncPixels();
    target_image.write( argv[ 3 ] );

    return 0;
}

CMakeLists.txt

cmake_minimum_required( VERSION 2.8.11 )
project( palette )
add_executable( palette palette.cpp)
find_package( ImageMagick COMPONENTS Magick++ )
if( ImageMagick_FOUND )
    include_directories( ${ImageMagick_INCLUDE_DIRS} )
    target_link_libraries( palette ${ImageMagick_LIBRARIES} )
endif( ImageMagick_FOUND )

Result

The full album is here. Of the results below, my favorites are probably American Gothic with the Mona Lisa palette, and Starry Night with the Spheres palette.

American Gothic Palette

Mona Lisa Palette

River Palette

The Scream Palette

Spheres Palette

Starry Night Palette

Boojum

Posted 2014-07-09T04:03:39.377

Reputation: 1 301

This looks fantastic! What do you think about how much this could be speeded up? Is there a chance for real time, i.e. 60fps on average hardware? – danijar – 2014-11-04T21:42:51.990

4

C++

Not the shortest code, but generates the answer 'instantly' despite being single-threaded and un-optimized. I am pleased with the results.

I generate two sorted lists of pixels, one for each image, sorted by a weighted 'brightness' value. I weight the channels at 100% green, 50% red and 10% blue, roughly matching the human eye's sensitivity. I then replace each pixel in the source image with the palette pixel at the same index in the sorted order, and write out the destination image.

I use the FreeImage library to read/write the image files.
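The sort-and-map idea is simple enough to sketch in Python (my own illustration, not a port of the C++ below): the i-th brightest source position receives the i-th brightest palette color.

```python
def brightness(p):
    # Weighted brightness: 100% green, 50% red, 10% blue.
    r, g, b = p
    return 0.5 * r + g + 0.1 * b

def rearrange(source, palette):
    # Indices of source pixels, brightest first.
    order = sorted(range(len(source)),
                   key=lambda i: brightness(source[i]), reverse=True)
    pal = sorted(palette, key=brightness, reverse=True)
    result = [None] * len(source)
    for rank, i in enumerate(order):
        result[i] = pal[rank]
    return result
```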

Code

/* Inputs: 2 image files of same area
Outputs: image1 made from pixels of image2*/
#include <iostream>
#include <stdlib.h>
#include "FreeImage.h"
#include <vector>
#include <algorithm>

class pixel
{
public:
    int x, y;
    BYTE r, g, b;
    float val;  //color value; weighted 'brightness'
};

bool sortByColorVal(const pixel &lhs, const pixel &rhs) { return lhs.val > rhs.val; }

FIBITMAP* GenericLoader(const char* lpszPathName, int flag) 
{
    FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;

    // check the file signature and deduce its format
    // (the second argument is currently not used by FreeImage)
    fif = FreeImage_GetFileType(lpszPathName, 0);
    if (fif == FIF_UNKNOWN) 
    {
        // no signature ?
        // try to guess the file format from the file extension
        fif = FreeImage_GetFIFFromFilename(lpszPathName);
    }
    // check that the plugin has reading capabilities ...
    if ((fif != FIF_UNKNOWN) && FreeImage_FIFSupportsReading(fif)) 
    {
        // ok, let's load the file
        FIBITMAP *dib = FreeImage_Load(fif, lpszPathName, flag);
        // unless a bad file format, we are done !
        return dib;
    }
    return NULL;
}

bool GenericWriter(FIBITMAP* dib, const char* lpszPathName, int flag) 
{
    FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
    BOOL bSuccess = FALSE;

    if (dib) 
    {
        // try to guess the file format from the file extension
        fif = FreeImage_GetFIFFromFilename(lpszPathName);
        if (fif != FIF_UNKNOWN) 
        {
            // check that the plugin has sufficient writing and export capabilities ...
            WORD bpp = FreeImage_GetBPP(dib);
            if (FreeImage_FIFSupportsWriting(fif) && FreeImage_FIFSupportsExportBPP(fif, bpp)) 
            {
                // ok, we can save the file
                bSuccess = FreeImage_Save(fif, dib, lpszPathName, flag);
                // unless an abnormal bug, we are done !
            }
        }
    }
    return (bSuccess == TRUE) ? true : false;
}

void FreeImageErrorHandler(FREE_IMAGE_FORMAT fif, const char *message) 
{
    std::cout << std::endl << "*** ";
    if (fif != FIF_UNKNOWN) 
    {
        std::cout << "ERROR: " << FreeImage_GetFormatFromFIF(fif) << " Format" << std::endl;
    }
    std::cout << message;
    std::cout << " ***" << std::endl;
}

FIBITMAP* Convert24BPP(FIBITMAP* dib)
{
    if (FreeImage_GetBPP(dib) == 24) return dib;

    FIBITMAP *dib2 = FreeImage_ConvertTo24Bits(dib);
    FreeImage_Unload(dib);
    return dib2;
}
// ----------------------------------------------------------

int main(int argc, char **argv)
{
    // call this ONLY when linking with FreeImage as a static library
#ifdef FREEIMAGE_LIB
    FreeImage_Initialise();
#endif

    FIBITMAP *src = NULL, *pal = NULL;
    int result = EXIT_FAILURE;

    // initialize my own FreeImage error handler
    FreeImage_SetOutputMessage(FreeImageErrorHandler);

    // print version
    std::cout << "FreeImage version : " << FreeImage_GetVersion() << std::endl;

    if (argc != 4) 
    {
        std::cout << "USAGE : Pic2Pic <source image> <palette image> <output file name>" << std::endl;
        return EXIT_FAILURE;
    }

    // Load the src image
    src = GenericLoader(argv[1], 0);
    if (src) 
    {
        // load the palette image
        pal = GenericLoader(argv[2], 0);

        if (pal) 
        {
            //compare areas
            // if(!samearea) return EXIT_FAILURE;
            unsigned int width_src = FreeImage_GetWidth(src);
            unsigned int height_src = FreeImage_GetHeight(src);
            unsigned int width_pal = FreeImage_GetWidth(pal);
            unsigned int height_pal = FreeImage_GetHeight(pal);

            if (width_src * height_src != width_pal * height_pal)
            {
                std::cout << "ERROR: source and palette images do not have the same pixel area." << std::endl;
                result = EXIT_FAILURE;
            }
            else
            {
                //go to work!

                //first make sure everything is 24 bit:
                src = Convert24BPP(src);
                pal = Convert24BPP(pal);

                //retrieve the image data
                BYTE *bits_src = FreeImage_GetBits(src);
                BYTE *bits_pal = FreeImage_GetBits(pal);

                //make destination image
                FIBITMAP *dst = FreeImage_ConvertTo24Bits(src);
                BYTE *bits_dst = FreeImage_GetBits(dst);

                //make a vector of all the src pixels that we can sort by color value
                std::vector<pixel> src_pixels;
                for (unsigned int y = 0; y < height_src; ++y)
                {
                    for (unsigned int x = 0; x < width_src; ++x)
                    {
                        pixel p;
                        p.x = x;
                        p.y = y;

                        p.b = bits_src[y*width_src * 3 + x * 3];
                        p.g = bits_src[y*width_src * 3 + x * 3 + 1];
                        p.r = bits_src[y*width_src * 3 + x * 3 + 2];

                        //calculate color value using a weighted brightness for each channel
                        //p.val = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b; //from http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html
                        p.val = 0.5f * p.r + p.g + 0.1f * p.b;                      

                        src_pixels.push_back(p);
                    }
                }

                //sort by color value
                std::sort(src_pixels.begin(), src_pixels.end(), sortByColorVal);

                //make a vector of all palette pixels we can use
                std::vector<pixel> pal_pixels;

                for (unsigned int y = 0; y < height_pal; ++y)
                {
                    for (unsigned int x = 0; x < width_pal; ++x)
                    {
                        pixel p;

                        p.b = bits_pal[y*width_pal * 3 + x * 3];
                        p.g = bits_pal[y*width_pal * 3 + x * 3 + 1];
                        p.r = bits_pal[y*width_pal * 3 + x * 3 + 2];

                        p.val = 0.5f * p.r + p.g + 0.1f * p.b;

                        pal_pixels.push_back(p);
                    }
                }

                //sort by color value
                std::sort(pal_pixels.begin(), pal_pixels.end(), sortByColorVal);

                //for each src pixel, match it with same index palette pixel and copy to destination
                for (unsigned int i = 0; i < width_src * height_src; ++i)
                {
                    bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3] = pal_pixels[i].b;
                    bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3 + 1] = pal_pixels[i].g;
                    bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3 + 2] = pal_pixels[i].r;
                }

                // Save the destination image
                bool bSuccess = GenericWriter(dst, argv[3], 0);
                if (!bSuccess)
                {
                    std::cout << "ERROR: unable to save " << argv[3] << std::endl;
                    std::cout << "This format does not support 24-bit images" << std::endl;
                    result = EXIT_FAILURE;
                }
                else result = EXIT_SUCCESS;

                FreeImage_Unload(dst);
            }

            // Free pal
            FreeImage_Unload(pal);
        }

        // Free src
        FreeImage_Unload(src);
    }

#ifdef FREEIMAGE_LIB
    FreeImage_DeInitialise();
#endif

    if (result == EXIT_SUCCESS) std::cout << "SUCCESS!" << std::endl;
    else std::cout << "FAILURE!" << std::endl;
    return result;
}

Results

American Gothic using Mona Lisa palette American Gothic using Mona Lisa palette American Gothic using Rainbow palette American Gothic using Rainbow palette Mona Lisa using Scream palette Mona Lisa using Scream palette Mona Lisa using Rainbow palette Mona Lisa using Rainbow palette Scream using Starry Night palette Scream using Starry Night palette

Darren

Posted 2014-07-09T04:03:39.377

Reputation: 261

4

C#

The points are visited in random-walk order, starting from the center, and each one is assigned the closest remaining color in the palette image. As a result, the last pixels to be matched come out quite badly.
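The visit order can be sketched like this (an illustration of the approach, not the author's C# code): wander from the center in cardinal steps and record each cell the first time it is reached. Cells visited late get whatever palette colors are left, which is why the outer pixels look worst.

```python
import random

def random_walk_order(width, height, seed=0):
    # Deterministic random walk from the center; returns every cell once,
    # in the order it was first reached.
    rng = random.Random(seed)
    x, y = width // 2, height // 2
    seen, order = set(), []
    while len(order) < width * height:
        if (x, y) not in seen:
            seen.add((x, y))
            order.append((x, y))
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        if 0 <= x + dx < width and 0 <= y + dy < height:
            x, y = x + dx, y + dy
    return order
```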

Results

Gothic Palette

enter image description here

enter image description here

And the American couple of visitors from Wikipedia

enter image description here

Mona Palette

enter image description here

enter image description here

enter image description here

Code:

I don't know why, but the code is pretty slow...

public class PixelExchanger
{
    public class ProgressInfo
    {
        public readonly Pixel NewPixel;
        public readonly int Percentage;

        public ProgressInfo(Pixel newPixel, int percentage)
        {
            this.NewPixel = newPixel;
            this.Percentage = percentage;
        }
    }

    public class Pixel
    {
        public readonly int X;
        public readonly int Y;
        public readonly Color Color;

        public Pixel(int x, int y, Color color)
        {
            this.X = x;
            this.Y = y;
            this.Color = color;
        }
    }

    private static Random r = new Random(0);

    private readonly Bitmap Pallete;
    private readonly Bitmap Image;

    private readonly int Width;
    private readonly int Height;

    private readonly Action<ProgressInfo> ProgressCallback;
    private System.Drawing.Image image1;
    private System.Drawing.Image image2;

    private int Area { get { return Width * Height; } }

    public PixelExchanger(Bitmap pallete, Bitmap image, Action<ProgressInfo> progressCallback = null)
    {
        this.Pallete = pallete;
        this.Image = image;

        this.ProgressCallback = progressCallback;

        Width = image.Width;
        Height = image.Height;

        if (Area != pallete.Width * pallete.Height)
            throw new ArgumentException("Image and Pallete have different areas!");
    }

    public Bitmap DoWork()
    {
        var array = GetColorArray();
        var map = GetColorMap(Image);
        var newMap = Go(array, map);

        var bm = new Bitmap(map.Length, map[0].Length);

        for (int i = 0; i < Width; i++)
        {
            for (int j = 0; j < Height; j++)
            {
                bm.SetPixel(i, j, newMap[i][j]);
            }
        }

        return bm;
    }

    public Color[][] Go(List<Color> array, Color[][] map)
    {
        var centralPoint = new Point(Width / 2, Height / 2);

        var q = OrderRandomWalking(centralPoint).ToArray();

        Color[][] newMap = new Color[map.Length][];
        for (int i = 0; i < map.Length; i++)
        {
            newMap[i] = new Color[map[i].Length];
        }

        double pointsDone = 0;

        foreach (var p in q)
        {
            newMap[p.X][p.Y] = Closest(array, map[p.X][p.Y]);

            pointsDone++;

            if (ProgressCallback != null)
            {
                var percent = 100 * (pointsDone / (double)Area);

                var progressInfo = new ProgressInfo(new Pixel(p.X, p.Y, newMap[p.X][p.Y]), (int)percent);

                ProgressCallback(progressInfo);
            }
        }

        return newMap;
    }

    private int[][] GetCardinals()
    {
        int[] nn = new int[] { -1, +0 };
        // int[] ne = new int[] { -1, +1 };
        int[] ee = new int[] { +0, +1 };
        // int[] se = new int[] { +1, +1 };
        int[] ss = new int[] { +1, +0 };
        // int[] sw = new int[] { +1, -1 };
        int[] ww = new int[] { +0, -1 };
        // int[] nw = new int[] { -1, -1 };

        var dirs = new List<int[]>();

        dirs.Add(nn);
        // dirs.Add(ne);
        dirs.Add(ee);
        // dirs.Add(se);
        dirs.Add(ss);
        // dirs.Add(sw);
        dirs.Add(ww);
        // dirs.Add(nw);

        return dirs.ToArray();
    }

    private Color Closest(List<Color> array, Color c)
    {
        int closestIndex = -1;

        int bestD = int.MaxValue;

        int[] ds = new int[array.Count];
        Parallel.For(0, array.Count, (i, state) =>
        {
            ds[i] = Distance(array[i], c);

            if (ds[i] <= 50)
            {
                closestIndex = i;
                state.Break();
            }
            else if (bestD > ds[i])
            {
                bestD = ds[i];
                closestIndex = i;
            }
        });

        var closestColor = array[closestIndex];

        array.RemoveAt(closestIndex);

        return closestColor;
    }

    private int Distance(Color c1, Color c2)
    {
        var r = Math.Abs(c1.R - c2.R);
        var g = Math.Abs(c1.G - c2.G);
        var b = Math.Abs(c1.B - c2.B);
        // Difference in saturation between the two colors
        var s = Math.Abs(c1.GetSaturation() - c2.GetSaturation());

        return (int)s + r + g + b;
    }

    private HashSet<Point> OrderRandomWalking(Point p)
    {
        var points = new HashSet<Point>();

        var dirs = GetCardinals();
        var dir = new int[] { 0, 0 };

        while (points.Count < Width * Height)
        {
            bool inWidthBound = p.X + dir[0] < Width && p.X + dir[0] >= 0;
            bool inHeightBound = p.Y + dir[1] < Height && p.Y + dir[1] >= 0;

            if (inWidthBound && inHeightBound)
            {
                p.X += dir[0];
                p.Y += dir[1];

                points.Add(p);
            }

            dir = dirs.Random(r);
        }

        return points;
    }

    private static Color[][] GetColorMap(Bitmap b1)
    {
        int height = b1.Height;
        int width = b1.Width;

        Color[][] colorMatrix = new Color[width][];
        for (int i = 0; i < width; i++)
        {
            colorMatrix[i] = new Color[height];
            for (int j = 0; j < height; j++)
            {
                colorMatrix[i][j] = b1.GetPixel(i, j);
            }
        }
        return colorMatrix;
    }

    private List<Color> GetColorArray()
    {
        var map = GetColorMap(Pallete);

        List<Color> colors = new List<Color>();

        foreach (var line in map)
        {
            colors.AddRange(line);
        }

        return colors;
    }
}

RMalke

Posted 2014-07-09T04:03:39.377

Reputation: 181

2These are pretty great. They look like photos that have been burned or left somewhere to rot. – None – 2014-07-16T05:47:33.810

Thanks, I did several algorithms, but the others were much like the other answers, so I posted the most distinctive one – RMalke – 2014-07-16T11:45:24.517

3

C#

Compares colors by how far away they are. Starts from the center.

EDIT: Updated, now should be about 1.5x faster.

American Gothic
enter image description here
The Scream
enter image description here
Starry Night
enter image description here
Marbles
enter image description here
River
enter image description here
Also, here's the yellow Chevy:
enter image description here

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Threading.Tasks;
using System.Diagnostics;

namespace ConsoleApplication1
{
    class Pixel
    {
        public int X = 0;
        public int Y = 0;
        public Color Color = new Color();
        public Pixel(int x, int y, Color clr)
        {
            Color = clr;
            X = x;
            Y = y;
        }
        public Pixel()
        {
        }
    }
    class Vector2
    {
        public int X = 0;
        public int Y = 0;
        public Vector2(int x, int y)
        {
            X = x;
            Y = y;
        }
        public Vector2()
        {
        }
        public double Diagonal()
        {
            return Math.Sqrt((X * X) + (Y * Y));
        }
    }
    class ColorCollection
    {
        Dictionary<Color, int> dict = new Dictionary<Color, int>();
        public ColorCollection()
        {
        }
        public void AddColor(Color color)
        {
            if (dict.ContainsKey(color))
            {
                dict[color]++;
                return;
            }
            dict.Add(color, 1);
        }
        public void UseColor(Color color)
        {
            // Guard against colors that aren't in the dictionary, so the
            // count lookup below can't throw
            if (!dict.ContainsKey(color))
                return;
            dict[color]--;
            if (dict[color] < 1)
                dict.Remove(color);
        }
        public Color FindBestColor(Color color)
        {
            Color ret = dict.First().Key;
            int p = this.CalculateDifference(ret, color);
            foreach (KeyValuePair<Color, int> pair in dict)
            {
                int points = CalculateDifference(pair.Key, color);
                if (points < p)
                {
                    ret = pair.Key;
                    p = points;
                }
            }
            this.UseColor(ret);
            return ret;
        }
        int CalculateDifference(Color c1, Color c2)
        {
            int ret = 0;
            ret = ret + Math.Abs(c1.R - c2.R);
            ret = ret + Math.Abs(c1.G - c2.G);
            ret = ret + Math.Abs(c1.B - c2.B);
            return ret;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            string img1 = "";
            string img2 = "";
            if (args.Length != 2)
            {
                Console.Write("Where is the first picture located? ");
                img1 = Console.ReadLine();
                Console.Write("Where is the second picture located? ");
                img2 = Console.ReadLine();
            }
            else
            {
                img1 = args[0];
                img2 = args[1];
            }
            Bitmap bmp1 = new Bitmap(img1);
            Bitmap bmp2 = new Bitmap(img2);
            Console.WriteLine("Getting colors....");
            ColorCollection colors = GetColors(bmp1);
            Console.WriteLine("Getting pixels....");
            List<Pixel> pixels = GetPixels(bmp2);
            int centerX = bmp2.Width / 2;
            int centerY = bmp2.Height / 2;
            pixels.Sort((p1, p2) =>
            {
                Vector2 p1_v = new Vector2(Math.Abs(p1.X - centerX), Math.Abs(p1.Y - centerY));
                Vector2 p2_v = new Vector2(Math.Abs(p2.X - centerX), Math.Abs(p2.Y - centerY));
                double d1 = p1_v.Diagonal();
                double d2 = p2_v.Diagonal();
                if (d1 > d2)
                    return 1;
                else if (d1 == d2)
                    return 0;
                return -1;
            });
            Console.WriteLine("Calculating...");
            int k = 0;
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < pixels.Count; i++)
            {
                if (i % 100 == 0 && i != 0)
                {
                    float percentage = ((float)i / (float)pixels.Count) * 100;
                    Console.WriteLine(percentage.ToString("0.00") + "% completed(" + i + "/" + pixels.Count + ")");
                    Console.SetCursorPosition(0, Console.CursorTop - 1);
                }
                Color set = colors.FindBestColor(pixels[i].Color);
                pixels[i].Color = set;
                k++;
            }
            sw.Stop();
            Console.WriteLine("Saving...");
            Bitmap result = WritePixelsToBitmap(pixels, bmp2.Width, bmp2.Height);
            result.Save(img1 + ".png");
            Console.WriteLine("Completed in " + sw.Elapsed.TotalSeconds + " seconds. Press a key to exit.");
            Console.ReadKey();
        }
        static Bitmap WritePixelsToBitmap(List<Pixel> pixels, int width, int height)
        {
            Bitmap bmp = new Bitmap(width, height);
            foreach (Pixel pixel in pixels)
            {
                bmp.SetPixel(pixel.X, pixel.Y, pixel.Color);
            }
            return bmp;
        }

        static ColorCollection GetColors(Bitmap bmp)
        {
            ColorCollection ret = new ColorCollection();
            for (int x = 0; x < bmp.Width; x++)
            {
                for (int y = 0; y < bmp.Height; y++)
                {
                    Color clr = bmp.GetPixel(x, y);
                    ret.AddColor(clr);
                }
            }
            return ret;
        }
        static List<Pixel> GetPixels(Bitmap bmp)
        {
            List<Pixel> ret = new List<Pixel>();
            for (int x = 0; x < bmp.Width; x++)
            {
                for (int y = 0; y < bmp.Height; y++)
                {
                    Color clr = bmp.GetPixel(x, y);
                    ret.Add(new Pixel(x, y, clr));
                }
            }
            return ret;
        }
    }
}

user3188175

Posted 2014-07-09T04:03:39.377

Reputation: 329

3

I decided to try using a very similar algorithm to my other answer, but swapping 2x2 blocks of pixels instead of individual pixels. Unfortunately, this algorithm adds the additional constraint that the image dimensions be divisible by 2, which makes the raytraced spheres unusable unless I change the sizes.

I really like some of the results!

American Gothic with river palette:

enter image description here

Mona Lisa with American Gothic palette:

enter image description here

Mona Lisa with river palette:

enter image description here

I tried 4x4 also, and here are my favorites!

Starry Night with the Scream palette:

enter image description here

Mona Lisa with American Gothic palette:

enter image description here

The Scream with the Mona Lisa palette:

enter image description here

American Gothic with the Mona Lisa palette:

enter image description here

LVBen

Posted 2014-07-09T04:03:39.377

Reputation: 290

1Was thinking about doing the same thing + calculate pixel weights based upon square blocks. I very much like the Mona Lisa results - they remind me of image from images thing. Can you by any chance do 4x4 blocks? – eithed – 2014-07-13T08:40:58.110

1@eithedog I tried 4x4 and it looks pretty good. See my updated answer! – LVBen – 2014-07-13T09:07:36.280

3

C#

This is really really really slow, but it does a great job, especially when using the raytraced spheres palette.

enter image description here enter image description here enter image description here enter image description here enter image description here

The Scream palette:

enter image description here enter image description here

Mona Lisa palette:

enter image description here enter image description here enter image description here enter image description here

American Gothic palette:

enter image description here enter image description here

River palette:

enter image description here enter image description here enter image description here

The Starry Night palette:

enter image description here enter image description here

   class Program
   {
      class Pixel
      {
         public int x;
         public int y;
         public Color color;
         public Pixel(int x, int y, Color color)
         {
            this.x = x;
            this.y = y;
            this.color = color;
         }
      }

      static Pixel BaselineColor = new Pixel(0, 0, Color.FromArgb(0, 0, 0, 0));

      static void Main(string[] args)
      {
         string sourceDirectory = "pic" + args[0] + ".png";
         string paletteDirectory = "pic" + args[1] + ".png";

         using (Bitmap source = Bitmap.FromFile(sourceDirectory) as Bitmap)
         {
            List<Pixel> sourcePixels = GetPixels(source).ToList();
            LinkedList<Pixel> palettePixels;

            using (Bitmap palette = Bitmap.FromFile(paletteDirectory) as Bitmap)
            {
               // GetPixels returns a List-backed IEnumerable, so an
               // 'as LinkedList<Pixel>' cast would yield null here.
               palettePixels = new LinkedList<Pixel>(GetPixels(palette));
            }

            if (palettePixels.Count != sourcePixels.Count)
            {
               throw new Exception("OH NO!!!!!!!!");
            }

            sourcePixels.Sort((x, y) => GetDiff(y, BaselineColor) - GetDiff(x, BaselineColor));

            LinkedList<Pixel> newPixels = new LinkedList<Pixel>();
            foreach (Pixel p in sourcePixels)
            {
               Pixel newPixel = GetClosestColor(palettePixels, p);
               newPixels.AddLast(newPixel);
            }

            foreach (var p in newPixels)
            {
               source.SetPixel(p.x, p.y, p.color);
            }
            source.Save("Out" + args[0] + "to" + args[1] + ".png");
         }
      }

      private static IEnumerable<Pixel> GetPixels(Bitmap source)
      {
         List<Pixel> newList = new List<Pixel>();
         for (int x = 0; x < source.Width; x++)
         {
            for (int y = 0; y < source.Height; y++)
            {
               newList.Add(new Pixel(x, y, source.GetPixel(x, y)));
            }
         }
         return newList;
      }

      private static Pixel GetClosestColor(LinkedList<Pixel> palettePixels, Pixel p)
      {
         Pixel minPixel = palettePixels.First();
         int diff = GetDiff(minPixel, p);
         foreach (var pix in palettePixels)
         {
            int current = GetDiff(pix, p);
            if (current < diff)
            {
               diff = current;
               minPixel = pix;
               if (diff == 0)
               {
                  return minPixel;
               }
            }
         }
         palettePixels.Remove(minPixel);
         return new Pixel(p.x, p.y, minPixel.color);
      }

      private static int GetDiff(Pixel a, Pixel p)
      {
         return GetProx(a.color, p.color);
      }

      private static int GetProx(Color a, Color p)
      {
         int red = (a.R - p.R) * (a.R - p.R);
         int green = (a.G - p.G) * (a.G - p.G);
         int blue = (a.B - p.B) * (a.B - p.B);
         return red + blue + green;
      }
   }

LVBen

Posted 2014-07-09T04:03:39.377

Reputation: 290

3

C++, version 2.1

UPDATED code and images to account for an error I noticed on the river images: FreeImage stores bitmap scanlines aligned to 32-bit boundaries, so I needed to account for the pitch in some cases, like the river image.
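For reference, the pitch of a 24-bit FreeImage scanline is the row width in bytes rounded up to the next 4-byte boundary; it can be computed like this (an illustrative Python snippet, not part of the C++ program):

```python
def pitch_24bpp(width):
    # bytes per scanline of a 24-bit bitmap: 3 bytes per pixel,
    # rounded up to a 32-bit (4-byte) boundary
    return ((width * 3 + 3) // 4) * 4
```

For example, a width of 5 pixels needs 15 data bytes but occupies 16 bytes per scanline, which is exactly the off-by-a-few-bytes drift that corrupted the river images before the fix.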

Slower than my first answer, taking approximately 15 seconds per image on my machine now, but I prefer most of the results. I am adding this as a second answer because it generates vastly different results in all cases.

This program is a two-pass algorithm. Pass 1 is like before: an extremely quick pass that builds two lists of pixels sorted by weighted brightness, matches up the pairs by index, and copies the results to the destination bitmap.

Pass 2 is a version of aditsu's "random swap" concept: I randomly pick two pixels in the destination bitmap and swap them if the swap would reduce the 'error measurement' against the source, terminating after a threshold of 10,000 sequential misses. Increasing the threshold would increase quality, but at the expense of time.
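The swap pass can be sketched in Python (a simplified model: flat lists of (r, g, b) tuples and a plain squared RGB distance, whereas the C++ code below weights the channels with shifts):

```python
import random

def sq_err(a, b):
    # plain squared RGB distance (the program additionally weights channels)
    return sum((ca - cb) ** 2 for ca, cb in zip(a, b))

def improve_by_swaps(source, dest, threshold=10000):
    # Randomly pick two positions and swap their destination colors
    # whenever that lowers the combined error; stop after `threshold`
    # consecutive attempts without an accepted swap.
    missed = 0
    while missed <= threshold:
        i, j = random.randrange(len(dest)), random.randrange(len(dest))
        before = sq_err(source[i], dest[i]) + sq_err(source[j], dest[j])
        after = sq_err(source[i], dest[j]) + sq_err(source[j], dest[i])
        if after < before:
            dest[i], dest[j] = dest[j], dest[i]
            missed = 0
        missed += 1
    return dest
```

Because a swap only rearranges colors already in the destination, the palette multiset is preserved by construction, no matter how many swaps are accepted.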

Code

/* Inputs: 2 image files of same area
Outputs: image1 made from pixels of image2*/
#include <iostream>
#include <stdlib.h>
#include "FreeImage.h"
#include <vector>
#include <algorithm>

class pixel
{
public:
    int x, y;
    BYTE r, g, b;
    float val;  //color value; weighted 'brightness'
};

bool sortByColorVal(const pixel &lhs, const pixel &rhs) { return lhs.val > rhs.val; }

FIBITMAP* GenericLoader(const char* lpszPathName, int flag) 
{
    FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;

    // check the file signature and deduce its format
    // (the second argument is currently not used by FreeImage)
    fif = FreeImage_GetFileType(lpszPathName, 0);
    if (fif == FIF_UNKNOWN) 
    {
        // no signature ?
        // try to guess the file format from the file extension
        fif = FreeImage_GetFIFFromFilename(lpszPathName);
    }
    // check that the plugin has reading capabilities ...
    if ((fif != FIF_UNKNOWN) && FreeImage_FIFSupportsReading(fif)) 
    {
        // ok, let's load the file
        FIBITMAP *dib = FreeImage_Load(fif, lpszPathName, flag);
        // unless a bad file format, we are done !
        return dib;
    }
    return NULL;
}

bool GenericWriter(FIBITMAP* dib, const char* lpszPathName, int flag) 
{
    FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
    BOOL bSuccess = FALSE;

    if (dib) 
    {
        // try to guess the file format from the file extension
        fif = FreeImage_GetFIFFromFilename(lpszPathName);
        if (fif != FIF_UNKNOWN) 
        {
            // check that the plugin has sufficient writing and export capabilities ...
            WORD bpp = FreeImage_GetBPP(dib);
            if (FreeImage_FIFSupportsWriting(fif) && FreeImage_FIFSupportsExportBPP(fif, bpp)) 
            {
                // ok, we can save the file
                bSuccess = FreeImage_Save(fif, dib, lpszPathName, flag);
                // unless an abnormal bug, we are done !
            }
        }
    }
    return (bSuccess == TRUE) ? true : false;
}

void FreeImageErrorHandler(FREE_IMAGE_FORMAT fif, const char *message) 
{
    std::cout << std::endl << "*** ";
    if (fif != FIF_UNKNOWN) 
    {
        std::cout << "ERROR: " << FreeImage_GetFormatFromFIF(fif) << " Format" << std::endl;
    }
    std::cout << message;
    std::cout << " ***" << std::endl;
}

FIBITMAP* Convert24BPP(FIBITMAP* dib)
{
    if (FreeImage_GetBPP(dib) == 24) return dib;

    FIBITMAP *dib2 = FreeImage_ConvertTo24Bits(dib);
    FreeImage_Unload(dib);
    return dib2;
}
// ----------------------------------------------------------

int main(int argc, char **argv)
{
    // call this ONLY when linking with FreeImage as a static library
#ifdef FREEIMAGE_LIB
    FreeImage_Initialise();
#endif

    FIBITMAP *src = NULL, *pal = NULL;
    int result = EXIT_FAILURE;

    // initialize my own FreeImage error handler
    FreeImage_SetOutputMessage(FreeImageErrorHandler);

    // print version
    std::cout << "FreeImage version : " << FreeImage_GetVersion() << std::endl;

    if (argc != 4) 
    {
        std::cout << "USAGE : Pic2Pic <source image> <palette image> <output file name>" << std::endl;
        return EXIT_FAILURE;
    }

    // Load the src image
    src = GenericLoader(argv[1], 0);
    if (src) 
    {
        // load the palette image
        pal = GenericLoader(argv[2], 0);

        if (pal) 
        {
            //first make sure everything is 24 bit:
            src = Convert24BPP(src);
            pal = Convert24BPP(pal);

            //compare areas
            // if(!samearea) return EXIT_FAILURE;
            unsigned int width_src = FreeImage_GetWidth(src);
            unsigned int height_src = FreeImage_GetHeight(src);
            unsigned int width_pal = FreeImage_GetWidth(pal);
            unsigned int height_pal = FreeImage_GetHeight(pal);

            //strides! FreeImage stores bitmap scanlines on 32-bit boundaries...old-school shit I haven't had to deal with in a while ;)
            unsigned int pitch_src = FreeImage_GetPitch(src);
            unsigned int pitch_pal = FreeImage_GetPitch(pal);

            if (width_src * height_src != width_pal * height_pal)
            {
                std::cout << "ERROR: source and palette images do not have the same pixel area." << std::endl;
                result = EXIT_FAILURE;
            }
            else
            {
                //go to work!               

                //retrieve the image data
                BYTE *bits_src = FreeImage_GetBits(src);
                BYTE *bits_pal = FreeImage_GetBits(pal);

                //make destination image
                FIBITMAP *dst = FreeImage_ConvertTo24Bits(src);
                BYTE *bits_dst = FreeImage_GetBits(dst);

                //probably same as src, but eh...
                unsigned int pitch_dst = FreeImage_GetPitch(dst);

                //make a vector of all the src pixels that we can sort by color value
                std::vector<pixel> src_pixels;
                for (unsigned int y = 0; y < height_src; ++y)
                {
                    for (unsigned int x = 0; x < width_src; ++x)
                    {
                        pixel p;
                        p.x = x;
                        p.y = y;

                        p.b = bits_src[y*pitch_src + x * 3];
                        p.g = bits_src[y*pitch_src + x * 3 + 1];
                        p.r = bits_src[y*pitch_src + x * 3 + 2];

                        //calculate color value using a weighted brightness for each channel
                        //p.val = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b; //from http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html
                        p.val = 0.5f * p.r + p.g + 0.1f * p.b;                      

                        src_pixels.push_back(p);
                    }
                }

                //sort by color value
                std::sort(src_pixels.begin(), src_pixels.end(), sortByColorVal);

                //make a vector of all palette pixels we can use
                std::vector<pixel> pal_pixels;

                for (unsigned int y = 0; y < height_pal; ++y)
                {
                    for (unsigned int x = 0; x < width_pal; ++x)
                    {
                        pixel p;

                        p.b = bits_pal[y*pitch_pal + x * 3];
                        p.g = bits_pal[y*pitch_pal + x * 3 + 1];
                        p.r = bits_pal[y*pitch_pal + x * 3 + 2];

                        p.val = 0.5f * p.r + p.g + 0.1f * p.b;

                        pal_pixels.push_back(p);
                    }
                }

                //sort by color value
                std::sort(pal_pixels.begin(), pal_pixels.end(), sortByColorVal);

                //for each src pixel, match it with same index palette pixel and copy to destination
                for (unsigned int i = 0; i < width_src * height_src; ++i)
                {
                    bits_dst[src_pixels[i].y * pitch_dst + src_pixels[i].x * 3] = pal_pixels[i].b;
                    bits_dst[src_pixels[i].y * pitch_dst + src_pixels[i].x * 3 + 1] = pal_pixels[i].g;
                    bits_dst[src_pixels[i].y * pitch_dst + src_pixels[i].x * 3 + 2] = pal_pixels[i].r;
                }

                //improve the destination via random swaps until we miss threshold swaps in a row
                unsigned int threshold = 10000;
                unsigned int missed = 0;
                unsigned int index1, index2, index1_dst, index2_dst;
                unsigned int distance_squared_current, distance_squared_swapped;
                BYTE tr, tg, tb;
                unsigned int x1, x2, y1, y2;

                srand(42); //for good luck!...and, er, repeatable results. Should maybe re-seed periodically if threshold is high.
                for (;;)
                {
                    //not assuming pitch_src == pitch_dst, for now

                    x1 = rand() % width_src;
                    x2 = rand() % width_src;
                    y1 = rand() % height_src;
                    y2 = rand() % height_src;
                    index1 = (x1) * 3 + (y1) * pitch_src;
                    index2 = (x2) * 3 + (y2) * pitch_src;
                    index1_dst = (x1)* 3 + (y1)* pitch_dst;
                    index2_dst = (x2)* 3 + (y2)* pitch_dst;

                    distance_squared_current = 
                        ((bits_src[index1 + 2] - bits_dst[index1_dst + 2]) * (bits_src[index1 + 2] - bits_dst[index1_dst + 2]) >> 3) +
                        (bits_src[index1 + 1] - bits_dst[index1_dst + 1]) * (bits_src[index1 + 1] - bits_dst[index1_dst + 1]) +
                        ((bits_src[index1] - bits_dst[index1_dst]) * (bits_src[index1] - bits_dst[index1_dst]) >> 1 )+
                        ((bits_src[index2 + 2] - bits_dst[index2_dst + 2]) * (bits_src[index2 + 2] - bits_dst[index2_dst + 2]) >> 3) +
                        (bits_src[index2 + 1] - bits_dst[index2_dst + 1]) * (bits_src[index2 + 1] - bits_dst[index2_dst + 1]) +
                        ((bits_src[index2] - bits_dst[index2_dst]) * (bits_src[index2] - bits_dst[index2_dst]) >> 1);

                    distance_squared_swapped = 
                        ((bits_src[index1 + 2] - bits_dst[index2_dst + 2]) * (bits_src[index1 + 2] - bits_dst[index2_dst + 2]) >> 3) +
                        (bits_src[index1 + 1] - bits_dst[index2_dst + 1]) * (bits_src[index1 + 1] - bits_dst[index2_dst + 1]) +
                        ((bits_src[index1] - bits_dst[index2_dst]) * (bits_src[index1] - bits_dst[index2_dst]) >> 1)+
                        ((bits_src[index2 + 2] - bits_dst[index1_dst + 2]) * (bits_src[index2 + 2] - bits_dst[index1_dst + 2]) >> 3) +
                        (bits_src[index2 + 1] - bits_dst[index1_dst + 1]) * (bits_src[index2 + 1] - bits_dst[index1_dst + 1]) +
                        ((bits_src[index2] - bits_dst[index1_dst]) * (bits_src[index2] - bits_dst[index1_dst]) >> 1);

                    if (distance_squared_swapped < distance_squared_current)
                    {
                        missed = 0;

                        tb = bits_dst[index1_dst];
                        tg = bits_dst[index1_dst + 1];
                        tr = bits_dst[index1_dst + 2];

                        bits_dst[index1_dst] = bits_dst[index2_dst];                        
                        bits_dst[index1_dst + 1] = bits_dst[index2_dst + 1];
                        bits_dst[index1_dst + 2] = bits_dst[index2_dst + 2];

                        bits_dst[index2_dst] = tb;                      
                        bits_dst[index2_dst + 1] = tg;
                        bits_dst[index2_dst + 2] = tr;
                    }

                    missed += 1;
                    if (missed > threshold) break; //done improvements
                }

                // Save the destination image
                bool bSuccess = GenericWriter(dst, argv[3], 0);
                if (!bSuccess)
                {
                    std::cout << "ERROR: unable to save " << argv[3] << std::endl;
                    std::cout << "This format does not support 24-bit images" << std::endl;
                    result = EXIT_FAILURE;
                }
                else result = EXIT_SUCCESS;

                FreeImage_Unload(dst);
            }

            // Free pal
            FreeImage_Unload(pal);
        }

        // Free src
        FreeImage_Unload(src);
    }

#ifdef FREEIMAGE_LIB
    FreeImage_DeInitialise();
#endif

    if (result == EXIT_SUCCESS) std::cout << "SUCCESS!" << std::endl;
    else std::cout << "FAILURE!" << std::endl;
    return result;
}

Results

American Gothic palette

enter image description here enter image description here enter image description here enter image description here enter image description here

Mona Lisa palette

enter image description here enter image description here enter image description here enter image description here enter image description here

Rainbow palette (toughy for this program!)

enter image description here enter image description here enter image description here enter image description here enter image description here

River palette

enter image description here enter image description here enter image description here enter image description here enter image description here

Scream palette

enter image description here enter image description here enter image description here enter image description here enter image description here

Starry Night palette

enter image description here enter image description here enter image description here enter image description here enter image description here

and for fun: Mustang in the Camaro palette

enter image description here

Sorry for the length of this post, but I loved this puzzle; it kept me thinking and tinkering, off and on all day.

Darren

Posted 2014-07-09T04:03:39.377

Reputation: 261

3

Java - Another Mapping Approach

Edit 1: After this was shared in a "maths" community on G+, it seems we all use matching approaches, with various ways to circumvent the complexity.

Edit 2: I messed up the images in my Google Drive and restarted, so the old links do not work any more. Sorry, I am still working on gaining enough reputation for more links.

Edit 3: Reading the other posts gave me some inspiration. The program is faster now, and I reinvested the saved CPU time in adjustments that depend on the target image location.

Edit 4: New program version. Faster! Special treatment of both areas with sharp corners and areas with very smooth changes (helps a lot with the ray tracing, but gives the Mona Lisa occasional red eyes)! Ability to generate intermediate frames for animations!

I really loved the idea, and Quincunx's solution kind of intrigued me, so I thought I might be able to add my 2 Euro cents.

The idea was that we obviously need a (somehow close) mapping between the two colour palettes.

With this idea, I spent the first night trying to tweak a stable-marriage algorithm to run fast, and within my PC's memory, on 123,520 candidates. While I got the memory under control, I found the runtime problem unsolvable.

The second night I decided to go further and dive into the Hungarian algorithm, which promises an optimal matching, i.e. minimum total distance between the colours of the two images. Fortunately I found 3 ready-to-use Java implementations (not counting the many half-finished student assignments that make it really hard to google for elementary algorithms). But as one might have expected, the Hungarian algorithm is even worse in terms of running time and memory usage. Even worse, all 3 implementations I tested returned occasional wrong results. I shiver when I think of other programs that might be based on them.

The third approach (end of the second night) was easy, quick, and after all not that bad: sort the colours in both images by luminosity and simply map by rank, i.e. darkest to darkest, second darkest to second darkest, and so on. This immediately creates a sharp-looking black-and-white reconstruction with some random colour sprayed around.
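That rank mapping is only a few lines; here is an illustrative Python sketch using the same 3/6/1 channel weights as the Java code below:

```python
def rank_map(source, palette):
    # source, palette: equal-length lists of (r, g, b) tuples.
    # Map the darkest source pixel to the darkest palette pixel,
    # the second darkest to the second darkest, and so on.
    lum = lambda p: 3 * p[0] + 6 * p[1] + p[2]
    order = sorted(range(len(source)), key=lambda i: lum(source[i]))
    colors = sorted(palette, key=lum)
    dest = [None] * len(source)
    for rank, i in enumerate(order):
        dest[i] = colors[rank]
    return dest
```

Since this is a bijection between the two sorted lists, every palette pixel is used exactly once, satisfying the challenge constraint for free.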

Approach 4, the final one so far (morning after the second night), starts with the above luminosity mapping and adds local corrections by applying the Hungarian algorithm to various overlapping sequences of pixels. This way I got a better mapping and worked around both the complexity of the problem and the bugs in the implementations.
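The windowed correction can be sketched in Python; here a brute-force minimum-cost assignment stands in for the Hungarian algorithm (only viable for tiny windows), and the window size and step are illustrative rather than the values the Java code uses:

```python
from itertools import permutations

def dist(p, q):
    # same 3/6/1 channel weighting as the Java distance()
    return 3 * (p[0] - q[0]) ** 2 + 6 * (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2

def refine_window(source, dest, start, size):
    # Re-match dest[start:start+size] to source[start:start+size] with a
    # minimum-total-distance assignment.  Brute force stands in for the
    # Hungarian algorithm here, so keep `size` tiny (<= 6 or so).
    src = source[start:start + size]
    cand = dest[start:start + size]
    best = min(permutations(cand),
               key=lambda perm: sum(dist(s, c) for s, c in zip(src, perm)))
    dest[start:start + size] = best
    return dest

def local_corrections(source, dest, size=4):
    # Slide overlapping windows across the rank-matched result.
    for start in range(0, max(1, len(dest) - size + 1), max(1, size // 2)):
        refine_window(source, dest, start, min(size, len(dest) - start))
    return dest
```

Overlapping the windows lets a pixel placed badly by the global rank mapping migrate across window boundaries over several correction passes.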

So here is some Java code; some parts might look similar to other Java code posted here. The Hungarian implementation used is a patched version of John Miller's, originally from the ontologySimilarity project. It was by far the fastest I found and showed the fewest bugs.

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Set;
import java.util.HashSet;
import java.util.Map;
import java.util.HashMap;
import java.util.List;
import javax.imageio.ImageIO;

/**
 *
 */
public class PixelRearranger {

    private final String mode;

    public PixelRearranger(String mode)
    {
        this.mode = mode;
    }

    public final static class Pixel {
        final BufferedImage img;
        final int val;
        final int r, g, b;
        final int x, y;

        public Pixel(BufferedImage img, int x, int y) {
            this.x = x;
            this.y = y;
            this.img = img;
            if ( img != null ) {
                val = img.getRGB(x,y);
                r = ((val & 0xFF0000) >> 16);
                g = ((val & 0x00FF00) >> 8);
                b = ((val & 0x0000FF));
            } else {
                val = r = g = b = 0;
            }
        }

        @Override
        public int hashCode() {
            return x + img.getWidth() * y + img.hashCode();
        }

        @Override
        public boolean equals(Object o) {
            if ( !(o instanceof Pixel) ) return false;
            Pixel p2 = (Pixel) o;
            return p2.x == x && p2.y == y && p2.img == img;
        }

        public double cd() {
            double x0 = 0.5 * (img.getWidth()-1);
            double y0 = 0.5 * (img.getHeight()-1);
            return Math.sqrt(Math.sqrt((x-x0)*(x-x0)/x0 + (y-y0)*(y-y0)/y0));
        }

        @Override
        public String toString() { return "P["+r+","+g+","+b+";"+x+":"+y+";"+img.getWidth()+":"+img.getHeight()+"]"; }
    }

    public final static class Pair
        implements Comparable<Pair>
    {   
        public Pixel palette, from;
        public double d;

        public Pair(Pixel palette, Pixel from)
        {
            this.palette = palette;
            this.from = from;
            this.d = distance(palette, from);
        }

        @Override
        public int compareTo(Pair e2)
        {
            return sgn(e2.d - d);
        }

        @Override
        public String toString() { return "E["+palette+from+";"+d+"]"; }
    }

    public static int sgn(double d) { return d > 0.0 ? +1 : d < 0.0 ? -1 : 0; }

    public final static int distance(Pixel p, Pixel q)
    {
        return 3*(p.r-q.r)*(p.r-q.r) + 6*(p.g-q.g)*(p.g-q.g) + (p.b-q.b)*(p.b-q.b);
    }

    public final static Comparator<Pixel> LUMOSITY_COMP = (p1,p2) -> 3*(p1.r-p2.r)+6*(p1.g-p2.g)+(p1.b-p2.b);


    public final static class ArrangementResult
    {
        private List<Pair> pairs;

        public ArrangementResult(List<Pair> pairs)
        {
            this.pairs = pairs;
        }

        /** Provide the output image */
        public BufferedImage finalImage()
        {
            BufferedImage target = pairs.get(0).from.img;
            BufferedImage res = new BufferedImage(target.getWidth(),
                target.getHeight(), BufferedImage.TYPE_INT_RGB);
            for(Pair p : pairs) {
                Pixel left = p.from;
                Pixel right = p.palette;
                res.setRGB(left.x, left.y, right.val);
            }
            return res;
        }

        /** Provide an interpolated image. 0 le;= alpha le;= 1 */
        public BufferedImage interpolateImage(double alpha)
        {
            BufferedImage target = pairs.get(0).from.img;
            int wt = target.getWidth(), ht = target.getHeight();
            BufferedImage palette = pairs.get(0).palette.img;
            int wp = palette.getWidth(), hp = palette.getHeight();
            int w = Math.max(wt, wp), h = Math.max(ht, hp);
            BufferedImage res = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            int x0t = (w-wt)/2, y0t = (h-ht)/2;
            int x0p = (w-wp)/2, y0p = (h-hp)/2;
            double a0 = (3.0 - 2.0*alpha)*alpha*alpha;
            double a1 = 1.0 - a0;
            for(Pair p : pairs) {
                Pixel left = p.from;
                Pixel right = p.palette;
                int x = (int) (a1 * (right.x + x0p) + a0 * (left.x + x0t));
                int y = (int) (a1 * (right.y + y0p) + a0 * (left.y + y0t));
                if ( x < 0 || x >= w ) System.out.println("x="+x+", w="+w+", alpha="+alpha);
                if ( y < 0 || y >= h ) System.out.println("y="+y+", h="+h+", alpha="+alpha);
                res.setRGB(x, y, right.val);
            }
            return res;
        }
    }

    public ArrangementResult rearrange(BufferedImage target, BufferedImage palette)
    {
        List<Pixel> targetPixels = getColors(target);
        int n = targetPixels.size();
        System.out.println("total Pixels "+n);
        Collections.sort(targetPixels, LUMOSITY_COMP);

        final double[][] energy = energy(target);

        List<Pixel> palettePixels = getColors(palette);
        Collections.sort(palettePixels, LUMOSITY_COMP);

        ArrayList<Pair> pairs = new ArrayList<>(n);
        for(int i = 0; i < n; i++) {
            Pixel pal = palettePixels.get(i);
            Pixel to = targetPixels.get(i);
            pairs.add(new Pair(pal, to));
        }
        correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.b - p1.d*p1.from.b));
        correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.r - p1.d*p1.from.r));
        // generates visible circular artifacts: correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.cd() - p1.d*p1.from.cd()));
        correct(pairs, (p1,p2) -> sgn(energy[p2.from.x][p2.from.y]*p2.d - energy[p1.from.x][p1.from.y]*p1.d));
        correct(pairs, (p1,p2) -> sgn(p2.d/(1+energy[p2.from.x][p2.from.y]) - p1.d/(1+energy[p1.from.x][p1.from.y])));
        // correct(pairs, null);
        return new ArrangementResult(pairs);

    }

    /**
     * derive an energy map, to detect areas of lots of change.
     */
    public double[][] energy(BufferedImage img)
    {
        int n = img.getWidth();
        int m = img.getHeight();
        double[][] res = new double[n][m];
        for(int x = 0; x < n; x++) {
            for(int y = 0; y < m; y++) {
                int rgb0 = img.getRGB(x,y);
                int count = 0, sum = 0;
                if ( x > 0 ) {
                    count++; sum += dist(rgb0, img.getRGB(x-1,y));
                    if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x-1,y-1)); }
                    if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x-1,y+1)); }
                }
                if ( x < n-1 ) {
                    count++; sum += dist(rgb0, img.getRGB(x+1,y));
                    if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x+1,y-1)); }
                    if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x+1,y+1)); }
                }
                if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x,y-1)); }
                if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x,y+1)); }
                res[x][y] = Math.sqrt((double)sum/count);
            }
        }
        return res;
    }

    public int dist(int rgb0, int rgb1) {
        int r0 = ((rgb0 & 0xFF0000) >> 16);
        int g0 = ((rgb0 & 0x00FF00) >> 8);
        int b0 = ((rgb0 & 0x0000FF));
        int r1 = ((rgb1 & 0xFF0000) >> 16);
        int g1 = ((rgb1 & 0x00FF00) >> 8);
        int b1 = ((rgb1 & 0x0000FF));
        return 3*(r0-r1)*(r0-r1) + 6*(g0-g1)*(g0-g1) + (b0-b1)*(b0-b1);
    }

    private void correct(ArrayList<Pair> pairs, Comparator<Pair> comp)
    {
        Collections.sort(pairs, comp);
        int n = pairs.size();
        int limit = Math.min(n, 133); // n / 1000;
        int limit2 = Math.max(1, n / 3 - limit);
        int step = (2*limit + 2)/3;
        for(int base = 0; base < limit2; base += step ) {
            List<Pixel> list1 = new ArrayList<>();
            List<Pixel> list2 = new ArrayList<>();
            for(int i = base; i < base+limit; i++) {
                list1.add(pairs.get(i).from);
                list2.add(pairs.get(i).palette);
            }
            Map<Pixel, Pixel> connection = rematch(list1, list2);
            int i = base;
            for(Pixel p : connection.keySet()) {
                pairs.set(i++, new Pair(p, connection.get(p)));
            }
        }
    }

    /**
     * Glue code to do an hungarian algorithm distance optimization.
     */
    public Map<Pixel,Pixel> rematch(List<Pixel> liste1, List<Pixel> liste2)
    {
        int n = liste1.size();
        double[][] cost = new double[n][n];
        Set<Pixel> s1 = new HashSet<>(n);
        Set<Pixel> s2 = new HashSet<>(n);
        for(int i = 0; i < n; i++) {
            Pixel ii = liste1.get(i);
            for(int j = 0; j < n; j++) {
                Pixel ij = liste2.get(j);
                cost[i][j] = -distance(ii,ij);
            }
        }
        Map<Pixel,Pixel> res = new HashMap<>();
        int[] resArray = Hungarian.hungarian(cost);
        for(int i = 0; i < resArray.length; i++) {
            Pixel ii = liste1.get(i);
            Pixel ij = liste2.get(resArray[i]);
            res.put(ij, ii);
        }
        return res;
    }

    public static List<Pixel> getColors(BufferedImage img) {
        int width = img.getWidth();
        int height = img.getHeight();
        List<Pixel> colors = new ArrayList<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                colors.add(new Pixel(img, x, y));
            }
        }
        return colors;
    }

    public static List<Integer> getSortedTrueColors(BufferedImage img) {
        int width = img.getWidth();
        int height = img.getHeight();
        List<Integer> colors = new ArrayList<>(width * height);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                colors.add(img.getRGB(x, y));
            }
        }
        Collections.sort(colors);
        return colors;
    }

    public static void main(String[] args) throws Exception {
        int i = 0;
        String mode = args[i++];
        PixelRearranger pr = new PixelRearranger(mode);
        String a1 = args[i++];
        File in1 = new File(a1);
        String a2 = args[i++];
        File in2 = new File(a2);
        File out = new File(args[i++]);
        //
        BufferedImage target = ImageIO.read(in1);
        BufferedImage palette = ImageIO.read(in2);
        long t0 = System.currentTimeMillis();
        ArrangementResult result = pr.rearrange(target, palette);
        BufferedImage resultImg = result.finalImage();
        long t1 = System.currentTimeMillis();
        System.out.println("took "+0.001*(t1-t0)+" s");
        ImageIO.write(resultImg, "png", out);
        // Check validity
        List<Integer> paletteColors = getSortedTrueColors(palette);
        List<Integer> resultColors = getSortedTrueColors(resultImg);
        System.out.println("validate="+paletteColors.equals(resultColors));
        // In Mode A we do some animation!
        if ( "A".equals(mode) ) {
            for(int j = 0; j <= 50; j++) {
                BufferedImage stepImg = result.interpolateImage(0.02 * j);
                File oa = new File(String.format("anim/%s-%s-%02d.png", a1, a2, j));
                ImageIO.write(stepImg, "png", oa);
            }
        }
    }
}

Current running time is 20 to 30 seconds for each of the image pairs above, but there are plenty of tweaks to make it either go faster or squeeze a bit more quality out of it.

It seems my newbie reputation does not allow this many links/images, so here is a textual shortcut to my Google Drive folder with image samples: http://goo.gl/qZHTao

The samples I wanted to show first:

People -> Mona Lisa http://goo.gl/mGvq9h

The program keeps track of all the point coordinates, but I feel exhausted now and do not plan to do animations for the moment. If I were to spend more time on this I might implement a Hungarian algorithm myself or tweak the local optimization schedule of my program.
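For tiny inputs, the optimal pairing that a Hungarian-style solver computes can be checked by brute force. A minimal Python sketch with hypothetical toy colors (not the Java code above, which uses a real O(n^3) Hungarian implementation):

```python
from itertools import permutations

def distance(c1, c2):
    # squared RGB distance between two colors
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def best_assignment(source, palette):
    # Brute-force the minimum-total-cost pairing; the Hungarian
    # algorithm finds the same optimum in O(n^3) instead of O(n!).
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(palette))):
        cost = sum(distance(source[i], palette[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Toy data: three source pixels, three palette pixels.
src = [(0, 0, 0), (255, 255, 255), (128, 0, 0)]
pal = [(250, 250, 250), (130, 10, 5), (5, 5, 5)]
print(best_assignment(src, pal))  # each source pixel gets its best palette partner
```

Brute force is only feasible for a handful of pixels, which is exactly why the answer above needs the Hungarian algorithm for full images.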

InI4

Posted 2014-07-09T04:03:39.377

Reputation: 31

3

Found this question a bit late, but better than never - here's a solution in

Javascript (Node)

Highlights:

  • Uses RGB color space
  • Runs on my laptop in 20 to 70 seconds (node pixels after npm install)
  • 215 lines (including Javadoc-style function comments)
  • Uses a binary search over the sorted pixel list
  • Randomizes source pixel selection to ensure an "even" distribution
  • Currently weights pixels by their RED component, so my results may be red-biased. For example, the ones from the "Scream" palette give the best results.
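The red bias comes from the sort key: each pixel is compared as a zero-padded decimal string, and red occupies the most significant digits. A toy Python sketch of that key (not the Node code itself):

```python
def zeropad(n, width=3):
    # fixed-width decimal digits so lexicographic order matches numeric order
    return str(n).zfill(width)

def sort_key(r, g, b):
    # red occupies the most significant digits of the 9-digit key,
    # so the sort order is dominated by the red channel
    return zeropad(r) + zeropad(g) + zeropad(b)

# A nearly black pixel with red = 1 sorts above bright green and cyan:
colors = [(0, 255, 255), (1, 0, 0), (0, 255, 0)]
print(sorted(colors, key=lambda c: sort_key(*c)))
```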

var startTime = new Date().getTime();

var fs = require("fs");
var pngjs = require("pngjs").PNG;

/**
 *
 */
var Pixels = function() {};

/**
 *
 */
Pixels.prototype = {
    source: null,
    confirm: null,
    target: null,
    result: {},
};

/**
 * Heavy lifting done here.
 */
Pixels.prototype.repalettize = function() {
    // Indices count from 0 to W*H-1, shifted left by 2 because each pixel occupies 4 bytes (RGBA) in the target buffer.
    var indices = [];
    for (var y = 0; y < P.target.height; y++) {
        for (var x = 0; x < P.target.width; x++) {
            indices.push((P.target.width * y + x) << 2);
        }
    }

    var len = indices.length;
    P.result.asBuffer = new Buffer(P.target.height * P.target.width * 4);
    P.result.asArray = [];

    var i, ii, matchIndex, matchRgb;

    while (len) {
        i = Math.floor(Math.random() * len);
        ii = indices[i];

        // Find RGB in source, no need for alpha channel.
        matchIndex = P.findMatch(
            zeropad(P.target.asBuffer[ii], 3) +
            zeropad(P.target.asBuffer[ii + 1], 3) +
            zeropad(P.target.asBuffer[ii + 2], 3)
        );

        matchRgb = P.source.asArray[matchIndex];

        P.result.asArray.push(matchRgb);

        P.result.asBuffer[ii] = matchRgb.substr(0, 3) * 1;
        P.result.asBuffer[ii + 1] = matchRgb.substr(3, 3) * 1;
        P.result.asBuffer[ii + 2] = matchRgb.substr(6, 3) * 1;
        P.result.asBuffer[ii + 3] = 255;

        indices.splice(i, 1);
        P.source.asArray.splice(matchIndex, 1);
        len = indices.length;
    }

    var resultImg = new pngjs({
        filterType: 4
    });

    resultImg.data = P.result.asBuffer;
    resultImg.width = P.target.width;
    resultImg.height = P.target.height;
    resultImg.pack().pipe(fs.createWriteStream('result.png'));

    var endTime = new Date().getTime();
    console.log((endTime - startTime) / 1000 + " seconds, confirming");

    P.doConfirm(P.confirm.asArray, P.result.asArray) ?
        console.log('OK - Source array and result array match.') :
        console.log('ERROR! Source array and result array do not match!');
};

/**
 * Slightly modified binary search tree.
 */
Pixels.prototype.findMatch = function(rgb0) {
    var start = 0;
    var end = P.source.asArray.length;
    var mid;

    while (start + 1 < end) {
        mid = Math.floor((end - start) / 2 + start);
        if (P.source.asArray[mid] < rgb0) {
            start = mid;
        }
        else {
            end = mid;
        }
    }

    return start;
};

/**
 *
 */
Pixels.prototype.doConfirm = function(arr1, arr2) {
    var len1 = arr1.length;
    var len2 = arr2.length;

    if (len1 !== len2) {
        return false;
    }

    arr1.sort();
    arr2.sort();

    for (var i = 0; i < len1; i++) {
        if (arr1[i] !== arr2[i]) {
            return false;
        }
    }

    return true;
};

/**
 * Reads an image from a path, generates required information, and passes information to callback.
 */
Pixels.prototype.read = function(path0, callback0) {
    var width = 0;
    var height = 0;
    var asBuffer = null;
    var asArray = null;

    fs.createReadStream(path0).pipe(new pngjs({ filterType: 4 }))
    .on('metadata', function(meta0) {
        width = meta0.width;
        height = meta0.height;
    })
    .on('parsed', function(buffer0) {
        var x, y, i;
        var arr = [];
        for (y = 0; y < height; y++) {
            for (x = 0; x < width; x++) {
                i = y * width + x << 2;

                arr.push(zeropad(buffer0[i], 3) + zeropad(buffer0[i + 1], 3) + zeropad(buffer0[i + 2], 3));
            }
        }

        callback0({
            width: width,
            height: height,
            asBuffer: buffer0,
            asArray: arr,
        });
    });
};

/**
 * Zero-pads a number so each pixel becomes a fixed-width 9-digit string for lexicographic sorting.
 */
var zeropad = function(str0, len0) {
    str0 = str0.toString();
    while (str0.length < len0) {
        str0 = "0" + str0;
    }
    return str0;
};

/**
 * Asynchronous file reads call this function as each one finishes. Once all three are done, processing can begin.
 */
var filesRead = 0;
var thenContinue = function(data0) {
    filesRead++;

    if (filesRead === 3) {
        P.source.asArray.sort();
        P.repalettize();
    }
}

/**
 * Information for the source image, where the pixels are taken from.
 */
var thenSaveSource = function(obj0) {
    P.source = obj0;
    thenContinue();
}

/**
 * A copy of the source data used after processing to ensure source pixels match result pixels.
 */
var thenSaveConfirm = function(obj0) {
    P.confirm = obj0;
    thenContinue();
}

/**
 * Information for the target images, which the pixels are matched to.
 */
var thenSaveTarget = function(obj0) {
    P.target = obj0;
    thenContinue();
}

//===== Entry point
var P = new Pixels();
P.read('scream.png', thenSaveSource);
P.read('scream.png', thenSaveConfirm);
P.read('starry.png', thenSaveTarget);

Click on any image to see it at a larger size. Source images are on the diagonal.

Ben

Posted 2014-07-09T04:03:39.377

Reputation: 131

Are you sure this actually only rearranges pixels (i.e., uses each palette colour only once or as often as in the reference image)? I think there's no way you could, say, from the grey-heavy spheres palette get so much purple as seen in your Scream version with that palette. – ceased to turn counterclockwis – 2015-04-12T16:41:47.760

@leftaroundabout - been a while since I did this but since each pixel is spliced out of the source array, I'm fairly sure it's OK...constructive criticism welcome however. You'll notice the others in the spheres palette (vertical) tend toward purple-ness also...and after looking at the spheres, there is a pink one, a red one, and a purple one, so there is even an argument for a purple-heavy source pixel set. – Ben – 2015-04-13T15:07:38.497

You are violating this instruction: “Each pixel in the Palette must be used exactly once in a unique position in this copy.” – Anders Kaseorg – 2015-05-18T04:24:14.623

@AndersKaseorg - I don't think I am...indices.splice(i, 1); ensures that the source palette is reduced each time. Where do you see the repetition - is there something I'm missing? – Ben – 2015-05-25T13:29:22.957

That line removes a position from the image being filled in, not a color from the source palette. – Anders Kaseorg – 2015-05-26T02:58:28.327

http://i.stack.imgur.com/f8AEn.png has 1025 pixels colored #F03BF0, while the spheres palette only has one such pixel. – Anders Kaseorg – 2015-05-26T03:03:38.750

@AndersKaseorg - ah! thank you - I will delete. I thought things seemed to be going a bit too well... :) – Ben – 2015-05-26T19:50:58.587

@AndersKaseorg - thanks! Refactored. Results not as good as before. :) – Ben – 2015-08-25T12:26:07.663

1

C#

This sorts the pixels of both images by brightness, then walks through both sorted lists in order from darkest to brightest, taking the x and y from each source pixel and the color from the palette pixel of the same rank. The combined pixels are added to a new list, which is used to create the new image.
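The sort-and-pair idea can be sketched in a few lines of Python on toy data (hypothetical colors; brightness measured as squared distance from black, as GetProx does with its black baseline):

```python
def brightness(color):
    # squared distance from black, like GetProx with the black baseline
    r, g, b = color
    return r * r + g * g + b * b

def rearrange(source, palette):
    # source: list of ((x, y), color) pairs; palette: list of colors.
    # Sort both by brightness and pair them rank for rank:
    # the darkest source position receives the darkest palette color.
    src_sorted = sorted(source, key=lambda p: brightness(p[1]))
    pal_sorted = sorted(palette, key=brightness)
    return {pos: col for (pos, _), col in zip(src_sorted, pal_sorted)}

source = [((0, 0), (10, 10, 10)), ((1, 0), (200, 200, 200)), ((0, 1), (90, 90, 90))]
palette = [(220, 210, 200), (5, 5, 5), (100, 80, 90)]
print(rearrange(source, palette))
```

Since every palette color is consumed exactly once, the multiset constraint from the question holds by construction.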


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace PictureConverter
{
   using System.Drawing;

   class Program
   {
      struct Pixel
      {
         public int x;
         public int y;
         public Color color;
         public Pixel(int x, int y, Color color)
         {
            this.x = x;
            this.y = y;
            this.color = color;
         }
      }

      static List<Pixel> sourcePixels = new List<Pixel>();
      static List<Pixel> palettePixels = new List<Pixel>();
      static List<Pixel> newPixels = new List<Pixel>();

      static Pixel BaselineColor = new Pixel(0, 0, Color.FromArgb(0, 0, 0, 0));

      static int width;
      static int height;
      static int currentPixel = 0;

      static void Main(string[] args)
      {
         string sourceDirectory = "pic" + args[0] + ".png";
         string paletteDirectory = "pic" + args[1] + ".png";

         using (Bitmap source = Bitmap.FromFile(sourceDirectory) as Bitmap)
         {
            width = source.Width;
            height = source.Height;
            for (int x = 0; x < source.Width; x++)
            {
               for (int y = 0; y < source.Height; y++)
               {
                  sourcePixels.Add(new Pixel(x, y, source.GetPixel(x, y)));
               }
            }

            using (Bitmap palette = Bitmap.FromFile(paletteDirectory) as Bitmap)
            {
               // ReSharper disable once PossibleNullReferenceException
               for (int x = 0; x < palette.Width; x++)
               {
                  for (int y = 0; y < palette.Height; y++)
                  {
                     palettePixels.Add(new Pixel(x, y, palette.GetPixel(x, y)));
                  }
               }
            }

            if (palettePixels.Count != sourcePixels.Count)
            {
               throw new Exception("OH NO!!!!!!!!");
            }

            sourcePixels.Sort((x, y) => GetProx(x, BaselineColor) - GetProx(y, BaselineColor));
            palettePixels.Sort((x, y) => GetProx(x, BaselineColor) - GetProx(y, BaselineColor));

            foreach (Pixel p in sourcePixels)
            {
               Pixel newPixel = GetClosestColor(p);
               newPixels.Add(newPixel);
               //Console.WriteLine(p.x + " " + p.y);
            }

            foreach (var p in newPixels)
            {
               source.SetPixel(p.x, p.y, p.color);
            }
            source.Save("Out" + args[0] + "to" + args[1] + ".png");
         }
      }

      private static Pixel GetClosestColor(Pixel p)
      {
         Pixel ret = palettePixels[currentPixel++];
         return new Pixel(p.x, p.y, ret.color);
      }

      private static int GetProx(Pixel a, Pixel p)
      {
         int red = (a.color.R - p.color.R) * (a.color.R - p.color.R);
         int green = (a.color.G - p.color.G) * (a.color.G - p.color.G);
         int blue = (a.color.B - p.color.B) * (a.color.B - p.color.B);
         return red + blue + green;
      }
   }
}

With the palette list sorted from brightest to darkest instead:

Change this:

palettePixels.Sort((x, y) => GetProx(x, BaselineColor) - GetProx(y, BaselineColor));

to this:

palettePixels.Sort((x, y) => GetProx(y, BaselineColor) - GetProx(x, BaselineColor));

enter image description here enter image description here enter image description here enter image description here enter image description here enter image description here

LVBen

Posted 2014-07-09T04:03:39.377

Reputation: 290

1

This is my first post here, so here we go...

Python 3.4.3

from PIL import Image
import time
from random import randint
from random import shuffle
t0=time.time()
# p1 with palette of p2
im=Image.open('0.png')
a=im.getdata()
x1=im.size[0]
y1=im.size[1]
im.close()
im=Image.open('4.png')
b=list(im.getdata())
im.close()
red=[]
green=[]
blue=[]
c=[]
t=[]
#b.sort()
shuffle(b)
rr=1
gr=2
br=1
ra=0
ga=0
ba=0
rb=0
gb=0
bb=0
rg=0
gg=0
bg=0
for x in range(0,len(b)):
    t=[b[x][0],b[x][1],b[x][2]]
    c.append(t)
    rb+=b[x][0]
    gb+=b[x][1]
    bb+=b[x][2]
rb/=len(b)
gb/=len(b)
bb/=len(b)
g=[]
for x in range(0,len(b)):
    t=[a[x][0],a[x][1],a[x][2]]
    g.append(t)
    rg+=a[x][0]
    gg+=a[x][1]
    bg+=a[x][2]
rg/=len(b)
gg/=len(b)
bg/=len(b)
temp=[]
e=0
ro=rb-rg
go=gb-gg
bo=bb-bg
for x in range(0,len(b)):
    g[x][0]+=ro
    g[x][1]+=go
    g[x][2]+=bo
for One_Republic in range(0,10):
    #'''
    e=0
    f=0
    te=0
    tf=0
    rr=[]
    ts=[]
    for x in range(0,10**5):
        e=randint(0,len(b)-1)
        f=randint(0,len(b)-1)
        te=(c[e][0]-g[e][0])**2+(c[e][1]-g[e][1])**2+(c[e][2]-g[e][2])**2
        tf=(c[f][0]-g[e][0])**2+(c[f][1]-g[e][1])**2+(c[f][2]-g[e][2])**2
        ts=[te,tf]
        if(tf==min(ts)):
            rr=list(c[f])
            c[f]=list(c[e])
            c[e]=list(rr)
    #'''
    e=0
    f=0
    te=0
    tf=0
    rr=[]
    ts=[]
    for Adele in range(0,3):
        for x in range(0,len(b)-2):
            e=x
            f=x+1
            te=(c[e][0]-g[e][0])**2+(c[e][1]-g[e][1])**2+(c[e][2]-g[e][2])**2
            tf=(c[f][0]-g[e][0])**2+(c[f][1]-g[e][1])**2+(c[f][2]-g[e][2])**2
            ts=[te,tf]
            if(tf==min(ts)):
                rr=list(c[f])
                c[f]=list(c[e])
                c[e]=list(rr)
    #'''
b=[]
for x in range(0,len(c)):
    t=(c[x][0],c[x][1],c[x][2])
    b.append(t)
im=Image.new('RGB',(x1,y1),'black')
im.putdata(b)
im.save('pic.png')
print(time.time()-t0)

The code tries to shuffle pixels, first randomly and then locally, to match the target painting. Before matching, it tints the target so its average color matches the palette's, which gives a better-defined result. The code has a timer and now automatically picks up the dimensions of the output file. In my folder,

0 is Starry Night

1 is The Scream

2 is Mona Lisa

3 is American Gothic

4 is Spheres

5 is The River
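The swap step above can be sketched as follows. This toy Python version uses a symmetric acceptance test, swapping two positions only when the combined squared error at both positions decreases (a sketch of the idea, not the exact loop above):

```python
import random

def sq_err(a, b):
    # squared RGB distance between two colors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def improve_by_swaps(copy, target, tries=10000, seed=0):
    # Pick two random positions and swap their colors whenever the
    # swap lowers the combined squared error against the target.
    rng = random.Random(seed)
    n = len(copy)
    for _ in range(tries):
        i, j = rng.randrange(n), rng.randrange(n)
        before = sq_err(copy[i], target[i]) + sq_err(copy[j], target[j])
        after = sq_err(copy[j], target[i]) + sq_err(copy[i], target[j])
        if after < before:
            copy[i], copy[j] = copy[j], copy[i]
    return copy

target = [(0, 0, 0), (255, 255, 255), (128, 128, 128)]
shuffled = [(250, 250, 250), (120, 130, 125), (5, 5, 5)]
print(improve_by_swaps(shuffled, target))
```

Because swaps only permute existing colors, the palette multiset is preserved no matter how many iterations run.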

So the results:

Obligatory Mona Lisas

Mona Lisa with American Gothic Mona Lisa with the Scream Mona Lisa with Starry Night Mona Lisa with Spheres Mona Lisa with The River

My other experiments

Starry Night with Spheres Spheres with Starry Night Starry Night with The Scream The Scream with Mona Lisa Spheres with The Scream

Magenta

Posted 2014-07-09T04:03:39.377

Reputation: 1 322

0

C#

Sorts the pixels of both images by gray scale, then paints each palette pixel onto the source position of the same rank.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;

namespace ImageSwap
{
    public class Pixel : IEquatable<Pixel>, IComparable<Pixel>
    {
        public int x, y;
        public Color rgb;
        public int grayScale;
        public Pixel() { }

        public int CompareTo(Pixel other)
        {
            if (this.grayScale < other.grayScale)
                return 1;
            else if (this.grayScale == other.grayScale)
                return 0;
            else
                return -1;
        }

        public bool Equals(Pixel other)
        {
            return this.grayScale == other.grayScale;
        }

        public override string ToString()
        {
            string s = string.Format("x:{0},y:{1}:gray:{2}", this.x, this.y, this.grayScale);
            return s;
        }
    }

    class PalleteCopy
    {
        private Image src, palette;

        PalleteCopy(Image s, Image p)
        {
            this.src = s;
            this.palette = p;
        }

        public List<Pixel> Analyse(Image img)
        {
            List<Pixel> pxList = new List<Pixel>();
            Bitmap b = (Bitmap)img;
            int w = b.Width;
            int h = b.Height;
            float gray = 0;
            for (int i = 0; i < w; i++)
            {               
                for (int j = 0; j < h; j++)
                {
                    Pixel px = new Pixel();
                    px.x = i;
                    px.y = j;
                    px.rgb = b.GetPixel(i, j);
                    gray = (px.rgb.B + px.rgb.G + px.rgb.R) / 3;
                    px.grayScale = (int)(gray);
                    pxList.Add(px);
                }

            }

            return pxList;
        }

        public void ToFile(List<Pixel> px)
        {
            StreamWriter w = null;
            try
            {
                w = new StreamWriter("C:/pics/pics.log", true);
                foreach (Pixel p in px)
                {
                    w.WriteLine(p.ToString());
                }
            }
            catch (Exception) { }
            finally { if (w != null) w.Close(); }
        }

        public List<Pixel> SortByGray(List<Pixel> pxList)
        {
            List<Pixel> pL = new List<Pixel>();
            foreach(Pixel p in pxList)
            {
                pL.Add(p);
            }
            pL.Sort();
            return pL;
        }

        public Image CreateCopy()
        {
            List<Pixel> pxSrc = this.Analyse(this.src);
            //ToFile(pxSrc);
            //this.Print(this.FromPixels(pxSrc), "C:/pics/p1.png");

            List<Pixel> pxPal = this.Analyse(this.palette);
            //ToFile(pxPal);
            //this.Print(this.FromPixels(pxPal), "C:/pics/p2.png");

            List<Pixel> sortSrc = this.SortByGray(pxSrc);
            //ToFile(sortSrc);
            //this.Print(this.FromPixels(sortSrc), "C:/pics/p3.png");

            List<Pixel> sortPal = this.SortByGray(pxPal);
            //ToFile(sortPal);
            //this.Print(this.FromPixels(sortPal), "C:/pics/p4.png");

            int w = this.src.Width;
            int h = this.src.Height;
            Bitmap copy = new Bitmap(w, h);
            int n = sortPal.Count;
            for(int i=0; i<n; i++)
            {
                copy.SetPixel(sortSrc[i].x, sortSrc[i].y, sortPal[i].rgb);
            }
            return copy;
        }

        public Image FromPixels(List<Pixel> px)
        {
            Bitmap copy = new Bitmap(this.src.Width, this.src.Height);
            int n = px.Count;
            for (int i = 0; i < n; i++)
            {
                copy.SetPixel(px[i].x, px[i].y, px[i].rgb);
            }
            return copy;
        }

        public void Print(Image img, string filename)
        {
            Bitmap b = (Bitmap)img;
            try
            {
                b.Save(filename);
            }
            catch (Exception) { }
        }

        static void Main(string[] args)
        {
            string src = @"C:\pics\source.png";
            string pal = @"C:\pics\palette.png";
            try
            {
                Image s = Image.FromFile(src);
                Image p = Image.FromFile(pal);
                PalleteCopy pc = new PalleteCopy(s, p);
                Image c = pc.CreateCopy();
                pc.Print(c, @"C:\pics\copy.png");

                Console.WriteLine("Success");
            }
            catch (Exception ex) 
            {
                Console.WriteLine("Failed: " + ex.Message);
            }
        }
    }
}

Result image:

enter image description here

bacchusbeale

Posted 2014-07-09T04:03:39.377

Reputation: 1 235

0

Squeak Smalltalk

This is pretty crude: it just scans through the pixels from the top and gives each one the best available match using the built-in Color >> diff:, which sums the per-channel RGB differences and normalizes the result to the range 0 to 1.

Usage:

  1. Create a PixelArranger object.
  2. #loadSrcImageFile: and #loadDestImageFile:.
  3. #rearrangeLinear. Transcript shows status (number of pixels).
  4. Convert arrangedForm to a Morph with #asMorph and then #openInWorld.
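The greedy matching in rearrangeLinear can be sketched in Python (toy colors; Color >> diff: approximated by hand as the normalized sum of channel differences):

```python
def diff(c1, c2):
    # like Squeak's Color>>diff: - sum the per-channel differences,
    # normalized so identical colors give 0.0 and full opposites give 1.0
    return sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)

def rearrange_linear(target, palette):
    # Greedily give each target pixel, in scan order, the closest
    # remaining palette color; pixels near the top get first pick.
    remaining = list(palette)
    out = []
    for color in target:
        best = min(remaining, key=lambda c: diff(color, c))
        remaining.remove(best)
        out.append(best)
    return out

target = [(0, 0, 0), (255, 0, 0), (200, 200, 200)]
palette = [(250, 10, 0), (10, 10, 10), (180, 190, 210)]
print(rearrange_linear(target, palette))
```

The scan-order greediness is what makes the result crude: pixels processed late are stuck with whatever colors remain.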

Mona Lisa in the palette of ray-traced balls

balls -> Mona Lisa

Here is the fileout:

'From Squeak4.4 of 11 September 2013 [latest update: #12337] on 13 July 2014 at 1:32:08 pm'!
Object subclass: #PixelArranger
    instanceVariableNames: 'srcForm srcImageFile palette destForm destImageFile arrangedForm'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'PixelArranger'!

!PixelArranger methodsFor: 'image loading' stamp: 'pnr 7/12/2014 18:55'!
loadDest: aForm
    destForm := aForm.

    arrangedForm := Form extent: (aForm extent) depth: (aForm depth).
! !

!PixelArranger methodsFor: 'image loading' stamp: 'pnr 7/12/2014 18:52'!
loadDestImageFile: aFilename
    self loadDest:(Form fromFileNamed: aFilename).
    destImageFile := aFilename.! !

!PixelArranger methodsFor: 'image loading' stamp: 'pnr 7/12/2014 23:49'!
loadSrc: aForm
    srcForm := aForm.

    self buildPalette.! !

!PixelArranger methodsFor: 'image loading' stamp: 'pnr 7/12/2014 18:47'!
loadSrcImageFile: aFilename
    self loadSrc:(Form fromFileNamed: aFilename).
    srcImageFile := aFilename.! !


!PixelArranger methodsFor: 'processing' stamp: 'pnr 7/13/2014 11:41'!
buildPalette
    "We scrape one pixel at a time off the source image and deposit it on to the palette."
    palette removeAll. "clean it first!!"

    srcForm bits do: [:each |
        palette add: (Color colorFromPixelValue: each depth: 32).
        ]! !

!PixelArranger methodsFor: 'processing' stamp: 'pnr 7/13/2014 11:17'!
pickClosestColor: aColor from: aColorCollection
    "Answer the closest color from the available source pixels"
    |  closest minDiff |
    minDiff := 1.0.
    closest := Color black.

    aColorCollection asSet do: [:each | | diff |
        diff := aColor diff: each.
        (diff = 0.0) ifTrue: [ ^each ].
        (diff < minDiff) ifTrue: [ 
            minDiff := diff.
            closest := each.
            ]
        ].

    ^closest.! !

!PixelArranger methodsFor: 'processing' stamp: 'pnr 7/12/2014 23:49'!
rearrangeLinear
    | count arrangedBits |
    count := 0.

    self buildPalette.

    arrangedBits := destForm bits collect:[:each | | closest target |
        target := Color colorFromPixelValue: each depth: 32.
        closest := self pickClosestColor: target from: palette.
        palette remove: closest ifAbsent:[].    
        (count \\ 100 = 0) ifTrue: [Transcript show: 'rearrangeLinear: ', count printString; cr].
        count := count + 1 .
        closest pixelValue32.
        ].

    arrangedForm bits: arrangedBits.
! !


!PixelArranger methodsFor: 'initialize-release' stamp: 'pnr 7/12/2014 23:47'!
initialize
    super initialize.
    palette := Bag new.! !

Paul Richter

Posted 2014-07-09T04:03:39.377

Reputation: 770