
Universal functions are vectorized functions that can work on all values in an array at once. This is part of the magic of how things work so fast in NumPy. Let's explore the common ones.

#### Learn More

- Automatic Vectorization (just immerse yourself, you don't need to understand it all)
- Write your own Ufunc

#### My Notes for New Way of Thinking

```
## Linear Algebra
* There is a module for linear algebra, [linalg](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html)
* You can solve for a system of equations using the [solve function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve)
* You can create a square 2 dimensional matrix and a constant row vector and solve for each variable column
* You can double check the answer using the inner product or [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html#numpy.dot).
* You can use the `@` operator to produce the dot product of two arrays.
```
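The linalg notes above can be sketched as a short example; the particular system of equations (2x + 3y = 8 and x - y = -1) is made up for illustration:

```python
import numpy as np

# Hypothetical system: 2x + 3y = 8 and x - y = -1
coefficients = np.array([[2.0, 3.0],
                         [1.0, -1.0]])
constants = np.array([8.0, -1.0])

# Solve for each variable in the column vector
solution = np.linalg.solve(coefficients, constants)
print(solution)  # [1. 2.]  -> x = 1, y = 2

# Double-check with the dot product: A @ x reproduces the constants
print(coefficients @ solution)  # [ 8. -1.]
```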

When we perform operations on our data
all at once without using a loop,
0:00

the operation is said to be vectorized.
0:04

It's not only faster, but we usually
end up writing fewer lines of code.
0:06

Out of the box, there are quite a few
NumPy functions that are available for
0:11

you to use that are already vectorized.
0:14

These are referred to as
universal functions, or ufuncs.
0:16

When there's one of these
universal functions available for
0:20

what you're trying to accomplish,
you want to make sure you make use of it.
0:23

Before you start writing a loop,
look first to the ufuncs.
0:26

Let's take a look at the more popular
examples of these universal functions and
0:30

then I'll show you where to learn more.
0:35

Here are my notes from the linear
algebra exercise that we just did.
0:37

So, there's a module for
linear algebra and it's linalg.
0:41

There's a link to
the documentation if you want it.
0:45

And you can also solve for a system of
equations using the solve function,
0:47

which was amazing, right?
0:50

We just made that two dimensional matrix
and we had a constant row vector and
0:51

we're able to solve for each variable.
0:55

And then we are able to double check,
using the inner product or dot.
0:57

Again, there's a link
to the documentation.
1:01

And again, in Python 3.5 and
up, you can use the @ sign
1:03

there to produce a dot product
of two arrays, awesome.
1:08

Some of that abstraction that
was going on in the linalg solve
1:13

function was using some vector math,
right?
1:16

And this is super common in just about
every interaction you'll have with NumPy.
1:19

You'll want to perform some
operations on two vectors together.
1:22

Let's take a look at what
that looks like real quick.
1:27

So how about we make
a couple of example arrays?
1:30

Let's go down here.
1:33

And let's make some example arrays.
1:35

So for our purposes,
we'll make one dimensional arrays.
1:37

And let's unpack some into a and b,
and we'll just use a split method.
1:41

So we'll say np.split,
1:46

let's see, let's make
an array of one through ten.
1:49

So we'll say np.arange and remember it's
exclusive but we'll start with 1 and
1:55

we'll go to 11, up to and
not including 11, since it's exclusive.
2:01

And the last parameter of
split is how many sections you want.
2:06

So we wanna take one through ten and
split it into two.
2:09

And then let's just double check we got
what we wanted, we'll print those out.
2:12

Awesome, so we have 1 through 5 and
6 through 10.
2:16
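Put together, the cell we just built looks like this:

```python
import numpy as np

# arange is exclusive of the stop value, so this is 1 through 10,
# and the last argument to split is how many sections we want
a, b = np.split(np.arange(1, 11), 2)
print(a)  # [1 2 3 4 5]
print(b)  # [ 6  7  8  9 10]
```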

So what happens when
we add these together?
2:20

Now, I know in regular Python world
this would make a new list with all of
2:24

the values, right?
2:28

Let's explore what happens when
we do this with ndarrays.
2:29

So we'll say a plus b,
that's pretty cool, right?
2:32

It does vector math, it adds each
element as they line up right?
2:37

So we have 6 plus 1 is 7,
and we have 7 plus 2 is 9,
2:41

and 8 plus 3 is 11, and so on.
2:46

So the plus operator has been
overloaded for these nd arrays, and
2:49

I bet then the other ones must be too,
right?
2:54

So what happens if we do a minus b?
2:57

We get an array of negative 5, right?
3:01

So, right, 1 minus 6 is negative 5, 2 minus
7 is negative 5, and then of course,
3:03

the order matters,
just like in math, right?
3:07

So if we do b minus a now,
we should get all 5s, awesome.
3:10

So then does that mean
that we can do a times b?
3:14

It does, awesome.
3:18
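Here are those overloaded element-wise operations in one place:

```python
import numpy as np

a, b = np.split(np.arange(1, 11), 2)  # a is 1-5, b is 6-10

print(a + b)  # [ 7  9 11 13 15]  elements added as they line up
print(a - b)  # [-5 -5 -5 -5 -5]
print(b - a)  # [5 5 5 5 5]       order matters, just like in math
print(a * b)  # [ 6 14 24 36 50]
```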

And so then what do you think happens
when we add a scalar to that array?
3:20

So what happens when I do a plus 2?
3:25

Interesting, look at that.
3:29

It took the right side, this two and
it applied it to every value,
3:30

so 1 plus 2, 2 plus 2, 3 plus 2.
3:37

It's almost as if there was a right
side that was the same size,
3:41

something like this.
3:46

So there's this method that you can do.
3:48

You can say np.repeat, so
we want to match a, which is 5 long.
3:49

So we want
an array of 2 that's 5 long.
3:55

So that would look like that.
3:59

And so basically, the same thing is
happening if we did a plus that, right?
4:00

We get the same thing.
4:06

So it stretched that two.
4:07

That's pretty neat, right?
4:10

So this ability to just stretch the scalar
value across all of the items is
4:11

called broadcasting.
4:16

The value 2 is broadcasted
across the array named a.
4:17
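A quick sketch of that scalar broadcast next to the same thing built by hand with np.repeat:

```python
import numpy as np

a = np.arange(1, 6)  # [1 2 3 4 5]

# The scalar 2 is broadcast across every value of a...
print(a + 2)  # [3 4 5 6 7]

# ...as if there were a right side of the same size
twos = np.repeat(2, 5)
print(twos)      # [2 2 2 2 2]
print(a + twos)  # [3 4 5 6 7]  same result, built by hand
```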

Broadcasting is handy because we don't
actually need to create this array, right?
4:22

We don't need to do that ourselves.
4:27

The operation automatically
assumed what we meant.
4:28

Now you'll see broadcasting
relied upon quite a bit.
4:31

These operators have
all been overloaded and
4:35

are actually using universal functions or
ufuncs.
4:38

And remember, ufuncs are vectorized.
4:42

See how we didn't have to loop
through the array ourselves?
4:46

It did all that looping
behind the scenes for us.
4:48

Let's pop over to some documentation for
these ufuncs.
4:51

One of the great things about this
weird naming is I can just search for
4:55

ufuncs and
I'm gonna find the documentation.
5:00

There it is, the first hit.
5:03

My wife also says that to me every
time I come back from the gym,
5:05

she says, ew, you funks.
5:09

That joke really stinks.
5:11

And I apologize in advance for
5:13

making you think about that bad joke
every time you hear the term ufuncs.
5:14

This is a helpful page as it details the
rules of broadcasting as well as universal
5:19

functions, and broadcasting, again, is
that array stretching that we just saw,
5:23

which is super important
in vectorized operations.
5:27

Again, our no looping functions.
5:30

So in the table of contents on this page,
which is not showing cuz it's so big,
5:33

I'm gonna shrink this down one, there we go.
5:36

So here's this table of contents.
5:38

Over here we're gonna take a look
at these available ufuncs.
5:40

Cool, I'm gonna blow this up one
more time, now that we're here.
5:46

And so here's the available ufuncs, and
as we look through them, here's the add.
5:49

We just did that,
let's go ahead and click into it.
5:53

So you'll see that if you use this style of
calling the function, we have extra
5:58

arguments.
6:03

They're optional, right,
like where=True.
6:04

For example, you can choose which
specific elements to operate on, okay?
6:07

And you can choose to cast it
to something else as well,
6:12

if that's what you're looking for.
6:15

But these first two are inputs,
this x1 and x2, that's the left and
6:16

the right that we were talking about.
6:20

So you'll notice here in the description
that it says that they must be
6:22

broadcastable, which is defined in those
broadcasting rules that we skipped over
6:26

just a bit ago in
the ufuncs documentation.
6:31

Let's scroll down here,
cuz there's some great examples.
6:34

So you'll see that you can add just fine,
you can add scalar values together.
6:36

So if you have a scalar plus a scalar
it returns what you think it would.
6:40

And the second one is showing another
example of broadcasting, which I think we
6:45

should explore, because this is showing a
3 by 3, and then it's adding a 3-long array.
6:49

So let's go ahead and
let's copy and paste this.
6:54

So I'm gonna copy this over to our
notebook, and I don't know about you, but
6:58

sometimes I have a hard time seeing
what it is exactly that's happening.
7:03

So I like to say x1, x2 and
now I can take a look at both of them and
7:07

I'm gonna move this down here.
7:11

One of the great things about
the notebooks is it lets you keep those
7:14

chevrons in there, those greater than
signs, it's not gonna bother anything.
7:17

So that's awesome for copying and pasting.
7:20

So there we go, so we have our array
here and this other array here and
7:23

what's gonna happen is we
are going to add those together.
7:28

Let me get rid of this.
7:32

We're gonna add those together.
7:33

So let's take a look and see what happens.
7:35

So it broadcasted this
array over each row.
7:37

So 0 plus 0, that's 0.
7:40

1 plus 1 is 2.
7:42

2 plus 2 is 4.
7:44

And then again, it took that to
the second one, it broadcasted there.
7:45

So 0 plus 3 is 3.
7:48

1 plus 4 is 5.
7:50

See how it's going there?
7:53

It's pretty cool, right?
7:54

And also, scalars can be broadcast
to multi-dimensional arrays.
7:56

So if we also do np.add
with 2, we can see that
8:00

the 2 is added to every
single one of the values.
8:05

See how it's broadcasted
to each and every value?
8:11
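This is roughly what we just ran, following the example from the np.add documentation:

```python
import numpy as np

x1 = np.arange(9.0).reshape((3, 3))
x2 = np.arange(3.0)

# The 1-D array is broadcast over each row of the 3x3 array
print(np.add(x1, x2))
# [[ 0.  2.  4.]
#  [ 3.  5.  7.]
#  [ 6.  8. 10.]]

# A scalar broadcasts to each and every value, too
print(np.add(x1, 2))
# [[ 2.  3.  4.]
#  [ 5.  6.  7.]
#  [ 8.  9. 10.]]
```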

So let's switch back to our documentation.
8:14

So there's some more math operations.
8:17

These are what was being called
when we were doing the overloading
8:19

basically, right?
8:22

So there's subtract, multiply, divide.
8:23

Let's get through all these.
8:25

Square root, that's handy.
8:28

All right, trigonometry functions.
8:31

Super handy, that is if you need them.
8:33

I know that trig can trigger some people.
8:35

Don't let it.
8:37

The important thing to remember is that
you can run these functions against all of
8:38

the values in your array all at once,
which is super powerful.
8:43

If you need to use these,
you'll be very happy.
8:47

So look at these here.
8:50

Here's hypot for hypotenuse, you know,
a squared plus b squared equals c squared.
8:51

This creates an array of all the c's.
8:57

Pretty nice, right?
9:00
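A small sketch of hypot; the leg lengths here are made-up values:

```python
import numpy as np

# Legs of a few right triangles (hypothetical values)
a = np.array([3.0, 5.0, 8.0])
b = np.array([4.0, 12.0, 15.0])

# hypot computes sqrt(a**2 + b**2) element-wise: an array of all the c's
print(np.hypot(a, b))  # [ 5. 13. 17.]
```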

Here are some low-level bit-twiddling
functions, again, handy if you need them,
9:03

if you're dealing with binary data and
you need to move some stuff around.
9:06

And then here's some comparison functions
and this too is operator overloaded.
9:10

Remember when we used the less than sign
to see all the minutes that were less
9:14

than 60?
9:18

Well, that was using this less ufunc.
9:19

And then when we wanted to check
that that comparison was true and
9:21

it was greater than 0,
we used logical_and.
9:26

Now, look, here's that warning about
not using the words and or or, but
9:30

instead using the bitwise &
and | operators.
9:35

And here's another warning that we
saw where we needed to make sure that
9:38

the order of operations was right?
9:41

So it's again, warning about the fact
that 2 and a is evaluated first, awesome.
9:43

This is reminding you to
do like we just learned.
9:48

Remember when we used that less than 60?
9:51

The 60 is a scalar, and we just broadcasted
that to each and every element for
9:54

a comparison.
9:58

Broadcasting is pretty
straightforward most of the time,
9:59

it's usually either a scalar or
a row, like we saw.
10:03

However, if you end up
seeing it happening and
10:06

scratch your head about what's going on,
you should bring this page back up, right?
10:09

So bring this page back up, cuz up here
at the very top, [LAUGH] sorry for
10:14

that scrolling, that might make you sick.
10:16

This broadcasting here.
10:18

These rules are detailed out, and
10:19

it's super handy if you can't
figure out what's going on.
10:21

And it's nice to review from time to time.
10:25

I don't wanna overload
you with these rules,
10:27

I just want you to know
where to find them defined.
10:31

And that's ufuncs for
you, super handy, right?
10:35

You definitely want to
lean heavily on them.
10:38

There is optimization that happens
at the core NumPy level and
10:41

you can actually even write your own.
10:44

Check the teacher's notes for
more information.
10:46

So why don't you jot down some notes about
ufuncs and vectorization in general,
10:49

maybe toss in a couple words about
broadcasting, and I'll do that too.
10:53

And after the break we'll review them and
check out some more handy routines.
10:57
