To normalize a vector is to scale it to a length of 1 (a unit vector) while preserving its direction.
For example, if we wanted to normalize a vector with 3 components, u, we would first find its length:
|u| = √(ux² + uy² + uz²)
...and then scale each component by this value to get a length 1 vector.
û = u ÷ |u|
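The two steps above can be transcribed directly (a minimal Python sketch for a 3-component vector; the variable names are my own):

```python
import math

# Step 1: compute the length |u| from the components.
u = (3.0, 4.0, 0.0)
length = math.sqrt(u[0]**2 + u[1]**2 + u[2]**2)  # |u| = 5.0

# Step 2: scale each component by 1/|u| to get the unit vector û.
u_hat = tuple(c / length for c in u)  # (0.6, 0.8, 0.0)
```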
The Challenge
Your task is to write a program or function which, given a non-empty list of signed numbers, interprets it as a vector and normalizes it. This should work for any number of dimensions, for example (test cases rounded to two decimal places):
[20] -> [1]
[-5] -> [-1]
[-3, 0] -> [-1, 0]
[5.5, 6, -3.5] -> [0.62, 0.68, -0.40]
[3, 4, -5, -6] -> [0.32, 0.43, -0.54, -0.65]
[0, 0, 5, 0] -> [0, 0, 1, 0]
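For reference, an ungolfed sketch that reproduces the test cases above (Python; the helper name `normalize` is my own, not part of the challenge):

```python
import math

def normalize(v):
    """Scale v to unit length, preserving its direction."""
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

print([round(x, 2) for x in normalize([3, 4, -5, -6])])
# [0.32, 0.43, -0.54, -0.65]
```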
Rules:
- You can assume the input list will:
- Have at least one non-zero element
- Only contain numbers within your language's standard floating point range
- Your output should be accurate to at least two decimal places. Returning "infinite precision" fractions / symbolic values is also allowed, if this is how your language internally stores the data.
- Submissions should be either a full program which performs I/O, or a function. Function submissions can either return a new list, or modify the given list in place.
- Builtin vector functions/classes are allowed. Additionally, if your language has a vector type which supports an arbitrary number of dimensions, you can take one of these as input.
This is a code-golf contest, so the shortest solution (in bytes) wins.
Does it have to be accurate to at least two decimal places for every possible input (which is not possible for any standard floating-point type), or only for the examples you provide? E.g. Steadybox's answer provides 2 decimal places of precision for all your tests, but he uses ints for the sum of squares, which of course fails for almost all inputs (e.g. [0.1, 0.1]). – Christoph – 2017-11-27T11:07:48.390
... now we just wait for a lang with built-in norm function mapped to one char... – None – 2017-11-27T13:55:49.033
It should be to at least 2dp for every possible input @Christoph – FlipTack – 2017-11-27T16:09:21.413
@FlipTack but that rules out basically all languages, because floating-point types have bigger exponents than mantissas, which means they do not always have enough precision to have any decimal places. – Christoph – 2017-11-27T19:34:22.600
Why don't the 6 in the 4th example and the -6 in the 5th respectively normalize to 1 and -1? – Mast – 2017-11-28T13:03:37.033
@Mast because the vector length, not the largest component, needs to be scaled to 1. – FlipTack – 2017-11-28T18:36:58.320