Universal functions are vectorized functions that can work on all values in an array at once. This is part of the magic of how things work so fast in NumPy. Let's explore the common ones.
Learn More
* Automatic Vectorization (just immerse yourself; you don't need to understand it all)
* Write your own Ufunc

## My Notes for a New Way of Thinking
## Linear Algebra
* There is a module for linear algebra, [linalg](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html)
* You can solve for a system of equations using the [solve function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve)
* You can create a square two-dimensional coefficient matrix and a constant row vector, then solve for each variable
* You can double check the answer using the inner product or [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html#numpy.dot).
* In Python 3.5 and up, you can use the `@` operator to produce the dot product of two arrays.
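As a quick sketch of those notes (the particular system of equations here is my own example, not one from the course):

```python
import numpy as np

# Solve the system:   x + 2y = 5
#                    3x + 4y = 11
A = np.array([[1.0, 2.0], [3.0, 4.0]])  # square coefficient matrix
b = np.array([5.0, 11.0])               # constant vector
solution = np.linalg.solve(A, b)
print(solution)  # solution: x = 1, y = 2

# Double-check the answer with the dot product: A @ solution reproduces b
print(A @ solution)
```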

0:00
When we perform operations on our data all at once without using a loop,

0:04
the operation is said to be vectorized.

0:06
It's not only faster, but we usually end up writing fewer lines of code.

0:11
Out of the box, there are quite a few NumPy functions that are available for

0:14
you to use that are already vectorized.

0:16
These are referred to as universal functions, or ufuncs.

0:20
When there's one of these universal functions available for

0:23
what you're trying to accomplish, you want to ensure that you make use of it.

0:26
Before you start writing a loop, look first to the ufuncs.

0:30
Let's take a look at the more popular examples of these universal functions and

0:35
then I'll show you where to learn more.

0:37
Here are my notes from the linear algebra exercise that we just did.

0:41
So, there's a module for linear algebra and it's linalg.

0:45
There's a link to the documentation if you want it.

0:47
And you can also solve for a system of equations using the solve function,

0:50
which was amazing, right?

0:51
We just made that two dimensional matrix and we had a constant row vector and

0:55
we're able to solve for each variable.

0:57
And then we are able to double check, using the inner product or dot.

1:01
Again, there's a link to the documentation.

1:03
And again, in Python 3.5 and up, you can use the @ sign

1:08
there to produce a dot product of two arrays, awesome.

1:13
Some of that abstraction that was going on in the linalg solve

1:16
function was using some vector math, right?

1:19
And this is super common in just about everything you'll do with NumPy.

1:22
You'll want to perform some operations on two vectors together.

1:27
Let's take a look at what that looks like real quick.

1:30
So how about we make a couple of example arrays?

1:33
Let's go down here.

1:35
And let's make some example arrays.

1:37
So for our purposes, we'll make one dimensional arrays.

1:41
And let's unpack them into a and b, and we'll just use the split function.

1:46
So we'll say np.split,

1:49
let's see, let's make an array of one through ten.

1:55
So we'll say np.arange, and we'll start with 1 and

2:01
we'll go up to and not including 11; remember, the stop value is exclusive.

2:06
And the last parameter of split is how many you want.

2:09
So we wanna take one through ten and split it into two.

2:12
And then let's just double check we got what we wanted, we'll print those out.

2:16
Awesome, so we have 1 through 5 and 6 through 10.
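In code, that setup might look like this (a minimal sketch of the steps just described):

```python
import numpy as np

# np.arange(1, 11) is exclusive of the stop value, so it yields 1 through 10;
# np.split's second argument is the number of equal sections to produce.
a, b = np.split(np.arange(1, 11), 2)
print(a)  # [1 2 3 4 5]
print(b)  # [ 6  7  8  9 10]
```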

2:20
So what happens when we add these together?

2:24
Now, I know in regular Python world this would make a new list with all of

2:28
the values, right?

2:29
Let's explore what happens when we do this with ndarrays.

2:32
So we'll say a plus b, that's pretty cool, right?

2:37
It does vector math, it adds each element as they line up right?

2:41
So we have 6 plus 1 is 7, and we have 7 plus 2 is 9,

2:46
and 8 plus 3 is 11, and so on.

2:49
So the plus operator has been overloaded for these ndarrays, and

2:54
I bet then the other ones must be too, right?

2:57
So what happens if we do a minus b?

3:01
We get an array of negative 5s, right?

3:03
So, right, 1 minus 6 is negative 5, 2 minus 7 is negative 5, and then of course,

3:07
the order matters, just like in math, right?

3:10
So if we do b minus a now, we should get all 5s, awesome.

3:14
So then does that mean that we can do a times b?

3:18
It does, awesome.
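A sketch of the element-wise arithmetic just demonstrated, all four overloaded operators in one cell:

```python
import numpy as np

a, b = np.split(np.arange(1, 11), 2)  # a is 1..5, b is 6..10

print(a + b)  # [ 7  9 11 13 15]
print(a - b)  # [-5 -5 -5 -5 -5]
print(b - a)  # [5 5 5 5 5]  (order matters, just like in math)
print(a * b)  # [ 6 14 24 36 50]
```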

3:20
And so then what do you think happens when we add a scalar to that array?

3:25
So what happens when I do a plus 2?

3:29
Interesting, look at that.

3:30
It took the right side, this two and it applied it to every value,

3:37
so 1 plus 2, 2 plus 2, 3 plus 2.

3:41
It's almost as if there was a right side that was the same size,

3:46
something like this.

3:48
So there's this method that you can do.

3:49
You can say np.repeat. So, a is 5 elements long.

3:55
We want an array of 2s that's 5 long.

3:59
So that would look like that.

4:00
And so basically, the same thing is happening if we did a plus that, right?

4:06
We get the same thing.

4:07
So it stretched that two.

4:10
That's pretty neat, right?

4:11
So this ability to just stretch the scalar value across all of the items is

4:16
called broadcasting.

4:17
The value 2 is broadcasted across the array named a.
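Here's a minimal sketch of that scalar broadcasting, including the np.repeat version it's equivalent to:

```python
import numpy as np

a = np.arange(1, 6)  # [1 2 3 4 5]

# Broadcasting stretches the scalar 2 across every element...
print(a + 2)                # [3 4 5 6 7]

# ...as if we had built an equal-length array of 2s ourselves.
print(a + np.repeat(2, 5))  # [3 4 5 6 7]
```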

4:22
Broadcasting is handy because we don't actually need to create this array, right?

4:27
We don't need to do that ourselves.

4:28
The equation automatically assumed what we meant.

4:31
Now you'll see broadcasting relied upon quite a bit.

4:35
These operators have all been overloaded and

4:38
are actually using universal functions or ufuncs.

4:42
And remember, ufuncs are vectorized.

4:46
See how we didn't have to loop through the array ourselves?

4:48
It did all that looping behind the scenes for us.
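To sketch the contrast: the explicit loop we'd write in plain Python versus the ufunc, which does the same looping in compiled code behind the scenes:

```python
import numpy as np

a = np.arange(1, 6)   # [1 2 3 4 5]
b = np.arange(6, 11)  # [ 6  7  8  9 10]

# The explicit-loop version we no longer need to write:
looped = np.array([x + y for x, y in zip(a, b)])

# The vectorized ufunc produces the same result without a visible loop:
vectorized = np.add(a, b)

print(np.array_equal(looped, vectorized))  # True
```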

4:51
Let's pop over to some documentation for these ufuncs.

4:55
One of the great things about this weird naming is I can just search for

5:00
ufuncs and I'm gonna find the documentation.

5:03
There it is, the first hit.

5:05
My wife also says that to me every time I come back from the gym,

5:09
she says, ew, you funks.

5:11
That joke really stinks.

5:13
And I apologize in advance for

5:14
making you think about that bad joke every time you hear the term ufuncs.

5:19
This is a helpful page as it details the rules of broadcasting as well as universal

5:23
functions and broadcasting, again, that array structuring that we just saw,

5:27
which is super important in vectorized operations.

5:30
Again, our no looping functions.

5:33
So in the table of contents on this page, which is not showing cuz it's so big,

5:36
I'm gonna shrink this down one, there we go.

5:38
So here's this table of contents.

5:40
Over here we're gonna take a look at these available ufuncs.

5:46
Cool, I'm gonna blow this up one more time, now that we're here.

5:49
And so here's the available ufuncs. Let's take a look; here's add.

5:53
We just did that, let's go ahead and click into it.

5:58
So you'll see that if we use this style of calling the function, we have extra

6:03
arguments.

6:04
They're optional, right? So there's where=True.

6:07
For example, you can choose which specific elements to operate on, okay?

6:12
And you can choose to cast it to something else as well,

6:15
if that's what you're looking for.

6:16
But these first two are the inputs, this x1 and x2, that's the left and

6:20
the right that we were talking about.

6:22
So you'll notice here in the description that it says that they must be

6:26
broadcastable, which is defined in those broadcasting rules that we skipped over

6:31
just a bit ago in the ufuncs documentation.

6:34
Let's scroll down here, cuz there's some great examples.

6:36
So you'll see that you can add just fine, you can add scalar values together.

6:40
So if you have a scalar plus a scalar it returns what you think it would.

6:45
And the second one is showing another example of broadcasting, which I think we

6:49
should explore, because this is showing a 3 by 3, and then it's adding by a 3 array.

6:54
So let's go ahead and let's copy and paste this.

6:58
So I'm gonna copy this over to our notebook, and I don't know about you, but

7:03
sometimes I have a hard time seeing what it is exactly that's happening.

7:07
So I like to say x1, x2 and now I can take a look at both of them and

7:11
I'm gonna move this down here.

7:14
One of the great things about the notebooks is it lets you keep those

7:17
chevrons in there, those greater-than signs, it's not gonna bother anything.

7:20
So that's awesome for copying and pasting.

7:23
So there we go, so we have our array here and this other array here and

7:28
what's gonna happen is we are going to add those together.

7:32
Let me get rid of this.

7:33
We're gonna add those together.

7:35
So let's take a look and see what happens.

7:37
So it broadcasted this array over each row.

7:40
So 0 plus 0, that's 0.

7:42
1 plus 1 is 2.

7:44
2 plus 2 is 4.

7:45
And then again, it took that to the second one, it broadcasted there.

7:48
So 0 plus 3 is 3.

7:50
1 plus 4 is 5.

7:53
See how it's going there?

7:54
It's pretty cool, right?

7:56
And also, scalars can be broadcast to multidimensional arrays.

8:00
So if we also do np.add with x1 and 2, we can see that

8:05
the 2 is added to every single one of the values.

8:11
See how it's broadcast to each and every value?
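Putting both of those together in one cell (the x1/x2 setup is the standard example from the np.add documentation):

```python
import numpy as np

x1 = np.arange(9.0).reshape((3, 3))  # 3x3: rows [0 1 2], [3 4 5], [6 7 8]
x2 = np.arange(3.0)                  # [0. 1. 2.]

# The length-3 row is broadcast over each row of the 3x3 array:
print(np.add(x1, x2))  # rows: [0 2 4], [3 5 7], [6 8 10]

# A scalar broadcasts across every element of the 2-D array, too:
print(np.add(x1, 2))   # rows: [2 3 4], [5 6 7], [8 9 10]
```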

8:14
So let's switch back to our documentation.

8:17
So there's some more math operations.

8:19
These are what was being called when we were doing the overloading

8:22
basically, right?

8:23
So there's subtract, multiply, divide.

8:25
Let's get through all these.

8:28
Square root, that's handy.
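A quick sketch of np.sqrt operating on a whole array at once (sample values are my own):

```python
import numpy as np

values = np.array([1.0, 4.0, 9.0, 16.0])
print(np.sqrt(values))  # square roots: 1, 2, 3, 4
```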

8:31
All right, trigonometry functions.

8:33
Super handy, that is if you need them.

8:35
I know that trig can trigger some people.

8:37
Don't let it.

8:38
The important thing to remember is that you can run these functions against all of

8:43
the values in your array all at once, which is super powerful.
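For example, a trig ufunc applied to a whole array of angles at once (my own sample values, in radians):

```python
import numpy as np

angles = np.array([0.0, np.pi / 2, np.pi])
# sin of every angle in one call; note sin(pi) comes back as ~1e-16
# due to floating point, not exactly zero.
print(np.sin(angles))
```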

8:47
If you need to use these, you'll be very happy.

8:50
So look at these here.

8:51
Here's hypot, for hypotenuse; remember, a squared plus b squared equals c squared.

8:57
This creates an array of all the C's.

9:00
Pretty nice, right?
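A sketch of np.hypot using some familiar Pythagorean triples (my own sample values):

```python
import numpy as np

# Arrays of the two shorter sides...
a = np.array([3.0, 5.0, 8.0])
b = np.array([4.0, 12.0, 15.0])

# ...and hypot gives sqrt(a**2 + b**2) element-wise: an array of all the c's.
print(np.hypot(a, b))  # hypotenuses: 5, 13, 17
```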

9:03
Here's some low level bit twiddling functions, again handy if you need it

9:06
if you're dealing with binary data and you need to move some stuff around.

9:10
And then here's some comparison functions and this too is operator overloaded.

9:14
Remember when we used the less than sign to see all the minutes that were less

9:18
than 60?

9:19
Well, that was using this less ufunc.

9:21
And then when we wanted to check that that comparison was true and

9:26
it was greater than 0, we used logical_and.
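A sketch of those comparison ufuncs, using hypothetical minute values (not the actual data from the earlier exercise):

```python
import numpy as np

minutes = np.array([-5, 15, 45, 60, 90])

under_an_hour = minutes < 60  # the < operator calls the np.less ufunc
positive = minutes > 0        # and > calls np.greater

# logical_and combines the two boolean arrays element-wise:
print(np.logical_and(positive, under_an_hour))
# [False  True  True False False]

# The & (bitwise and) operator is the overloaded shorthand; the parentheses
# matter, because & binds tighter than the comparisons.
print((minutes > 0) & (minutes < 60))
```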

9:30
Now, look, here's that warning about not using the keywords and or or, but

9:35
instead using the bitwise & and | operators.

9:38
And here's another warning that we saw where we needed to make sure that

9:41
the order of operations was used, right?

9:43
So it's again warning about the fact that 2 & a is evaluated first, awesome.

9:48
This is reminding you to do like we just learned.

9:51
Remember when we used that less than 60?

9:54
The 60 is scalar and we just broadcasted that to each and every element for

9:58
a comparison.

9:59
Broadcasting is pretty straightforward most of the time,

10:03
it's usually either a scalar or a row, like we saw.

10:06
However, if you end up seeing it happening and

10:09
scratch your head about what's going on, you should bring this page back up, right?

10:14
So bring this page back up, cuz up here at the very top, [LAUGH] sorry for

10:16
that scrolling, that might make you sick.

10:18
This broadcasting here.

10:19
These rules are detailed out, and

10:21
it's super handy if you can't figure out what's going on.

10:25
And it's nice to review from time to time.

10:27
I don't wanna overload you with these rules,

10:31
I just want you to know where to find them defined.

10:35
And that's ufuncs for you, super handy, right?

10:38
You definitely want to lean heavily on them.

10:41
There is optimization that happens at the core NumPy level and

10:44
you can actually even write your own.

10:46
Check the teacher's notes for more information.

10:49
So why don't you jot down some notes about ufuncs and vectorization in general,

10:53
maybe toss in a couple words about broadcasting, and I'll do that too.

10:57
And after the break we'll review them and check out some more handy routines.