If you are familiar with the process outlined above, or just generally familiar with full stack web development, read on. This article will provide you with the answers to the following questions:

- How do users actually access this deployed application?^{1}
- Where does your application live?
- How is your application transferred from a far off server to a user’s computer?

To begin, let’s introduce some of the major players in this story. A **web client** is software that translates user input into requests made to another computer called a web server. Web browsers (like Chrome, Safari or Firefox) are web clients. The computer that client software runs on has a unique numerical identifier attached to it called an **Internet Protocol (IP) Address**. Every computer on the Internet (this includes every client and server) has an IP address. IP addresses allow computers using the Internet Protocol to identify one another. The **Internet Protocol** is the set of rules for how computers connected to the Internet should route data to the computer it is intended for.

A server is a piece of hardware much like an ordinary computer, but typically without a screen or keyboard attached. The software running on a server is called a **server process**. The simple application described at the beginning of this article lives on a web server; you can think of your application as the server process on the server it lives on. A web server is a type of server that stores web pages and delivers them to clients when a client asks for them. A web server "listens" for requests originating from some client.

The server you built using Express and Node handles requests made by a client. The server listens for requests and responds accordingly. This relationship - between a client that requests resources and the server that provides these resources - is described by the **client-server model**. This model is one of the constraints of a network architectural specification called **Representational State Transfer (REST)**. REST can be thought of as a set of rules for how a network is organized. It should be noted that REST is just one set of rules for how a computer network can be organized - others exist too. A network whose organization follows these rules is considered *RESTful*. The web abides by a RESTful specification. A RESTful network follows these general constraints^{2}:
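As a minimal sketch of the client-server model - not the actual Express server from the article - here is a plain JavaScript request handler with a hypothetical route, invoked the way a server would invoke it when a request arrives:

```
// A minimal sketch of a server-side request handler, assuming a
// Node-style (req, res) interface. The "/hello" route is hypothetical.
function handleRequest(req, res) {
  if (req.url === "/hello") {
    res.statusCode = 200;              // the client asked for a known resource
    res.body = "Hello from the server";
  } else {
    res.statusCode = 404;              // the resource does not exist here
    res.body = "Not found";
  }
  return res;
}

// A client "asks" for a resource; the server responds.
var response = handleRequest({ url: "/hello" }, {});
console.log(response.statusCode); // 200
```

The point is only the shape of the exchange: the client supplies a request, the server inspects it and returns a response.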

- **Statelessness**: This refers to the way requests are made. A request made by a client to a server contains all the detail necessary for the server to completely understand and subsequently fulfill that request. No prior knowledge is required by the server to fulfill a client’s request.
- **Client-server model**: The network follows the client-server model: clients ask a web server for resources and the web server provides these resources.
- **Uniform interface**: Each part of a RESTful system interacts with every other part of the system in a standardized way. Requests are “written” in a way that is understandable by any server in the network.

How is the client-server model and REST carried out in practice? The answer to this question can mostly be answered by understanding the **HyperText Transfer Protocol (HTTP)**, the **Transmission Control Protocol (TCP)** and the **Internet Protocol (IP)**. HTTP is a specification for how clients should request information from servers. It is how a web browser specifies what resources it wants from, or wants to add to, a server. HTTP utilizes TCP and IP. You can think of HTTP as an abstraction of how data is transferred between computers in a network, and TCP/IP as describing the details for how data is actually transferred.

TCP is the set of rules for how information should be transferred over the Internet. TCP specifies that data transferred over the Internet must be broken up into packets. It also specifies how these packets are reassembled into their original message once they have all reached the intended IP address. A **packet** is a chunk of information of some specified size - size here refers to the number of bytes that represent a packet - that can be transferred over a network. The size of a packet is dictated partly by the physical limitations of the communication network being used. IP is responsible for specifying the path by which a packet of information should travel through the Internet so it reaches its intended recipient.
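As a toy illustration (not real TCP), the following sketch breaks a message into fixed-size, sequence-numbered packets and reassembles them, even when they arrive out of order:

```
// Toy illustration of packetization - not real TCP.
// Each packet carries a sequence number so the original message can
// be reassembled even if packets arrive out of order.
function toPackets(message, packetSize) {
  var packets = [];
  for (var i = 0; i < message.length; i += packetSize) {
    packets.push({ seq: packets.length, data: message.slice(i, i + packetSize) });
  }
  return packets;
}

function reassemble(packets) {
  return packets
    .slice()
    .sort(function (a, b) { return a.seq - b.seq; }) // restore original order
    .map(function (p) { return p.data; })
    .join("");
}

var packets = toPackets("Hello, Internet!", 4);
var shuffled = packets.slice().reverse(); // simulate out-of-order arrival
console.log(reassemble(shuffled)); // "Hello, Internet!"
```

Real TCP packets also carry checksums, acknowledgements and retransmission logic; the sequence numbers here capture only the reassembly idea.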

When a user wants to visit your deployed application they enter its **Uniform Resource Locator (URL)** into their browser’s address bar. A URL identifies a particular resource on the web. It specifies the protocol (like http) in which the request is being made, a domain name which is used to find the IP address of the web server you are requesting resources from, and the name of the file or resource you are requesting. For example: `http://www.example.com/index.html`.
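A URL’s parts can be inspected programmatically. This sketch uses the standard `URL` class (available in modern browsers and Node); the address itself is just an illustrative example:

```
// Breaking a URL into its parts with the standard URL class.
// The address is an illustrative example, not a real application.
var url = new URL("http://www.example.com/index.html");

console.log(url.protocol); // "http:"           - the protocol of the request
console.log(url.hostname); // "www.example.com" - used to find the server's IP address
console.log(url.pathname); // "/index.html"     - the resource being requested
```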

To reiterate, entering a URL into the browser initiates a request for some resource that exists on some remote server. The first answer your browser receives is the IP address of the server hosting the resource you are requesting, as well as the port number of the server process that will handle your request.

Your browser uses this IP address and port number to connect to the server and make HTTP requests for any resources it wants. It then loads the requested resources onto your machine and constructs the DOM. Your application now appears on the user’s machine.

So there you have it. After reading this you should understand the basics of how resources are transferred over the Internet. Obviously, there are many details not mentioned here, like HTTP verbs, the DNS and ISPs.

For more information on this broad topic see:

- this three part article on REST, HTTP and the structure of the Internet
- *Programming JavaScript Applications* by Eric Elliott. An online version of the book can be found here.
- this resource is good for those wanting a particularly in-depth treatment of the subject matter

1. Everything discussed in this article applies to a simple application with relatively few users. Once an application is more widely used, additional engineering concerns arise. ↩

2. There are more constraints to a RESTful network; I am only listing three of them. ↩

Interested in learning more about JavaScript? Visit us at http://www.makersquare.com/.

Big-O notation is borrowed from mathematics. A branch of computer science called computational complexity theory uses this notation to classify how costly a problem is to solve. Computational complexity theory is concerned with classifying problems based on the resources needed to solve a problem programmatically (i.e., solve a problem using an algorithm). We typically think of resources in terms of time (How much time does this algorithm take to run?) - but resources can be anything an algorithm needs to run, like computer memory or network bandwidth.

We use big-O notation to state the worst-case **asymptotic time complexity** of an algorithm. When determining the **asymptotic** time complexity of an algorithm we are concerned only with how the running time of an algorithm changes for very large input sizes.

Practically speaking, determining how an algorithm’s running time scales with large input sizes (i.e., scales *asymptotically*) makes sense. Most modern computers can run even the most inefficient algorithms in milliseconds so performance doesn’t matter in the small input size regime. Only when an algorithm begins to process larger input sizes does the efficiency of that algorithm become important.

Now I want to begin clearing up what I mean by *running time* - in doing so, other related concepts will become clear. Running time is simply the total amount of time an algorithm takes to run. What the algorithm is doing during the time it runs can be thought of as being broken up into a series of constant-time operations. What is an *operation*? An operation is any primitive computation that we assume a computer’s CPU can do in a roughly fixed amount of time. For example, in the `average` function below we assume that adding to `sum` is a single, constant-time basic operation and that dividing `sum` by `array.length` is equal to 2 basic operations.

```
function average(array) {
  var sum = 0;
  for (var index = 0; index < array.length; index++) {
    sum = sum + array[index];
  }
  return sum / array.length;
}
```
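One way to make the operation counting concrete is to instrument `average` with a counter. This sketch assumes, as above, that each addition costs 1 operation and the final division costs 2:

```
// Instrumented version of average that counts basic operations,
// assuming each addition costs 1 operation and the final
// division costs 2. Returns the operation count.
function countAverageOps(array) {
  var ops = 0;
  var sum = 0;
  for (var index = 0; index < array.length; index++) {
    sum = sum + array[index];
    ops = ops + 1; // one operation per addition
  }
  ops = ops + 2;   // dividing sum by array.length counts as 2 operations
  return ops;
}

console.log(countAverageOps([1, 2, 3]));       // 5
console.log(countAverageOps([1, 2, 3, 4, 5])); // 7
```

For an array of length n the count is always n + 2: one operation per element plus two for the division.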

Running time is proportional to the number of operations needed to complete an algorithm. If we consider an algorithm as a sum of steps, or operations, each taking a constant amount of time, we can then define a general equation for the running time of an algorithm. This equation represents the number of operations an algorithm needs to take - with respect to input size n - in order to complete. For `average` we get

T(n) = n + 2

Each iteration of the `for-loop` takes a constant amount of time for each input. The step where `sum` is divided by `array.length` occurs precisely once.

Now we have translated `average` into a function, namely T(n). If we were to graphically represent T(n) we would plot n on the x-axis and T(n) on the y-axis. This plot would look roughly linear.

Procuring an equation like T(n) can be done for any algorithm.

Now let’s formally define big-O: T(n) is *O*(f(n)) if there exist a constant c > 0 and an input size n_{0} such that T(n) ≤ c · f(n) for all n ≥ n_{0}.

What this means is that at some input size n_{0} we can define a function f(n) that will always be greater than or equal to T(n) divided by some constant for all input larger than n_{0}. Looking at `average` we could define a constant c = 2 and let f(n) = n, which corresponds to *O*(n). This would then give us n_{0} = 2. In other words,

n + 2 ≤ 2n for all n ≥ 2.

Since f(n) = n we say, by definition, that `average` has a worst-case time complexity of *O*(n). Intuitively, what this is saying is that beyond some point n_{0}, T(n) grows as fast as or more slowly than f(n) times c. Plotting T(n) and c · f(n) together demonstrates this: for all values of n greater than n_{0}, T(n) is less than or equal to f(n) times c.
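The inequality can also be checked numerically. Assuming, as computed for `average` above, that T(n) = n + 2, this sketch tests T(n) ≤ c · f(n) with c = 2, f(n) = n and n_{0} = 2 over a range of inputs:

```
// Numerically checking the big-O inequality T(n) <= c * f(n)
// for T(n) = n + 2, f(n) = n, c = 2, n0 = 2.
function T(n) { return n + 2; }
function f(n) { return n; }
var c = 2;
var n0 = 2;

var holds = true;
for (var n = n0; n <= 1000; n++) {
  if (T(n) > c * f(n)) { holds = false; }
}
console.log(holds);            // true: T(n) <= 2n for all n >= 2
console.log(T(1) <= c * f(1)); // false: the bound can fail below n0
```

Note the bound fails at n = 1 (since 3 > 2), which is exactly why the definition only requires the inequality beyond n_{0}.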

Now let’s talk about what inputs big-O requires. Big-O yields the **worst-case** asymptotic time complexity of an algorithm. Let’s say we have a sorting algorithm like `bubblesortCheck` (see below). `bubblesortCheck` cycles through an array of length n, n number of times, swapping any two consecutive integers in which a bigger integer precedes a smaller integer. Let’s say it also has some constant-time method called `arrayIsSorted` that checks to see if the array is already sorted at each iteration. If the array is already sorted, `bubblesortCheck` terminates on the first iteration of the loop and returns the sorted array. Here is `bubblesortCheck` in pseudocode.

```
function bubblesortCheck(array)
  for each index in the array
    if arrayIsSorted is true
      return array
    for each index in the array
      swap any two unsorted consecutive integers
  return sorted array
```
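A runnable JavaScript translation of the pseudocode might look like the following. Note that a real `arrayIsSorted` check is a linear scan, not the constant-time check assumed above for the sake of the argument:

```
// A runnable JavaScript version of bubblesortCheck. The article
// assumes arrayIsSorted runs in constant time; this straightforward
// implementation is actually a linear scan.
function arrayIsSorted(array) {
  for (var i = 1; i < array.length; i++) {
    if (array[i - 1] > array[i]) return false;
  }
  return true;
}

function bubblesortCheck(array) {
  for (var i = 0; i < array.length; i++) {
    if (arrayIsSorted(array)) return array; // best case: already sorted
    for (var j = 0; j < array.length - 1; j++) {
      if (array[j] > array[j + 1]) {
        var tmp = array[j];       // swap two unsorted consecutive integers
        array[j] = array[j + 1];
        array[j + 1] = tmp;
      }
    }
  }
  return array;
}

console.log(bubblesortCheck([3, 1, 2]));       // [1, 2, 3]
console.log(bubblesortCheck([1, 2, 3]));       // best case: returns on the first check
console.log(bubblesortCheck([5, 4, 3, 2, 1])); // worst case: reversed input
```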

If you were to pass an already sorted array into this algorithm, `bubblesortCheck` would finish in constant time, instead of *O*(n^{2}). An already sorted array is the best-case input for this algorithm. The worst-case input would be one that takes `bubblesortCheck` the maximum amount of time (and requires the most operations) to run: this would be a reversed array. Each algorithm has a best-case and a worst-case input.

Big-O notation is only concerned with the worst-case input. It is an upper-bound on the running time of an algorithm.
