We often do multiple things at once on our mobile devices but take for granted the computing complexity involved. In this video, we'll start with a history lesson and look at what it means to execute multiple blocks of code simultaneously.
The work we did in the previous video is perfectly valid, working code, but it's not good code, and moreover it's not obvious that it's not good code. Before I explain why, a history lesson, if I may.
In the early days of computing, hardware limitations were a pretty big problem. The amount of work that we could do was extremely dependent on the speed of the processor, the brains of the computer. We could only do one thing at a time, and we had to wait until a single task was done before we could move on to the next one. In time, we innovated and made more powerful processors that were smaller than their predecessors and could do more work in the same amount of time. But soon we ran into another problem. As processors got smaller, we were limited in the amount of work we could do because of physical constraints.
Now, hindrances breed innovation, and soon we figured out we could add more cores, or more brains, to the processor, thereby allowing us to do more work without affecting the speed or the size of the processor. Aha, problem solved.
Well, not really. We had solved the hardware issue, but now we had to figure out how to neatly take advantage of all these extra cores to do even more work. At first, we came up with a decent solution: we created software that can do multiple things simultaneously by creating multiple threads to use these multiple cores.
We use the term thread to refer to a path of execution for code. This concept is known as concurrency: the notion of multiple things happening at the same time. Now, in the past, when you wanted to do multiple things, you just did them one after the other, since you could only execute one thing at a time. With threading, we could create multiple threads that use different cores to execute tasks simultaneously.
This isn't an easy job, though. Your applications have to know how many threads can be created based on the number of cores there are. You need to make sure your threads are executed in some order, and you can't let them interfere with one another. It's a painful job, and it puts the burden on us, the developers.
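To make that burden concrete, here is a rough sketch of manual thread management in Swift, using Foundation's `Thread` API; the way the work is split up is purely illustrative:

```swift
import Foundation

// We have to ask the system how many cores are available...
let coreCount = ProcessInfo.processInfo.activeProcessorCount

// ...and then create and start each thread ourselves, deciding how the
// work is split up and making sure the threads don't interfere.
for index in 0..<coreCount {
    let worker = Thread {
        print("Thread \(index) doing its slice of the work")
    }
    worker.start()
}
```

Notice that nothing here coordinates the threads; ordering and synchronization would be entirely on us, which is exactly the pain being described.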
Luckily, even smarter people came along to better solve this problem. iOS does not rely on threads up front; it takes an asynchronous design approach to solving concurrency.
A synchronous task means doing something and waiting until it's done. Asynchronous means doing something behind the scenes and continuing the main work regardless of whether the background task is actually complete. Now, this is still done through the use of threads, but luckily you don't need to worry about that. The system handles it; it's abstracted away for us.
Instead of managing threads ourselves, we add our tasks to queues. The system then takes care of creating the needed threads and scheduling tasks behind the scenes.
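As a rough sketch of what that looks like with Grand Central Dispatch (the printed messages are just placeholders):

```swift
import Foundation

// Instead of creating threads, we add tasks to queues and let the
// system decide how many threads to create and when each task runs.
let backgroundQueue = DispatchQueue.global(qos: .background)

backgroundQueue.async {
    // This task runs behind the scenes on a system-managed thread.
    print("background task running")

    DispatchQueue.main.async {
        // Work that must happen on the main queue is queued back here.
        print("back on the main queue")
    }
}
```

In an app, the main queue is serviced automatically by the run loop; in a command-line context you would need something like `dispatchMain()` to keep the program alive long enough for the queued work to run.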
But why are we bothering with this, and how is it relevant to us? You can think of iOS as a highway with multiple lanes. Each lane here is a queue. On this highway, we have vehicles speeding along; think of these vehicles as blocks of your code executing.
The main lane, the central one, is used for the user interface. When a user taps, swipes, scrolls, or interacts with the user interface in any way, that code is given top priority and added to the main queue. We never want this main queue to get backed up, because then our app would feel slow or even unresponsive.
When we make a networking call, if we have any issues and the request takes a long time, because of a slow connection, for example, it can block the UI if the networking code is on the main thread. None of the user interface code would get executed, and the user wouldn't be able to do anything with the app. They would think it was unresponsive, or broken, and get frustrated.
3:51
All of our networking code should be done in a background queue.
-
3:55
One of the side limits, so that we don't block the UI from being responsive.
-
3:59
When the network request complete, we need to move back from the background queue
-
4:04
to the main queue to update our UI since all UI updates are done on the main queue.
-
4:09
And the code we just wrote is synchronized code.
-
4:12
This is like blocking a lane with a slow moving truck.
-
4:16
It's not obvious because our network is pretty fast, at least mine is.
-
4:20
And when I fetch the data, it returns immediately.
-
4:23
But if you had a slow connection and you're trying to update the UI at the same
-
4:27
time that would be like putting a slow moving truck in there.
-
4:31
The app would be unresponsive.
When we write asynchronous code, we move this truck to another lane, and it keeps the traffic on the main lane flowing as normal. We need to change our code here to be asynchronous. Concurrency can improve the responsiveness of your code by ensuring that your main thread is free to respond to user events. But before we do that, we need to look at how we can write concurrent code in Swift.