In part 1, we developed a simple single-threaded HTTP server that processed requests synchronously. In part 2, we modified the server so that each request was handled in a separate thread, which improved concurrency but increased the load on the CPU and memory. In this article, we'll modify the server to use a ThreadPool, which keeps the concurrency benefits while reducing CPU and memory usage.
Contents
A Simple HTTP Server - Part 1
A Simple HTTP Server - Multi Threading - Part 2
A Simple HTTP Server - ThreadPool - Part 3
ThreadPools
Before we begin developing a ThreadPool, let me explain what a ThreadPool is for those who are not sure. Simply put, a ThreadPool is a collection of pre-initialized idle threads that are waiting to do some work. When a thread in the collection receives a task, it executes it; once it's done, the thread goes back to waiting for a new task. Reusing threads in this manner allows us to achieve a greater level of concurrency without burdening the system's resources.
Let's take the basic design of a ThreadPool and write some code to handle multiple client connections in our HTTP server. We know that we need to create threads and initialize them so that they are ready and waiting to accept a task. A good place to start is to create a custom thread class that inherits from the Thread class.
Listing 1
class SocketThread extends Thread{
public void run(){
// Do work here
}
}
The SocketThread class above will need to be initialized and started before the server begins accepting client connections. When the server accepts a new client connection, we need to pass the connection to one of the threads in our pool. We can add a setSocket() method to the SocketThread class and call it when the server accepts a client connection.
Listing 2
class SocketThread extends Thread{
public void setSocket(Socket socket){
// To-Do
}
...
}
The setSocket() method needs to store the socket in a collection so that it can be queued. The run() method will then take a socket from the head of the queue and continue the client-handling process. We can use a LinkedBlockingQueue to store the sockets, as this type of queue is thread-safe.
Listing 3
class SocketThread extends Thread{
private final LinkedBlockingQueue<Socket> queue;
public SocketThread(){
this.queue = new LinkedBlockingQueue<Socket>();
}
public void setSocket(Socket socket){
this.queue.add(socket);
}
public void run(){
// Do work here
}
}
The run() method gets called when the thread is started. Once the method has finished executing, the thread is terminated. To keep the thread alive, we need to stop the run() method from completing. You may be tempted to use an infinite loop that repeatedly polls the queue, but busy-waiting like that will cause the system's CPU to spike. Thankfully, the LinkedBlockingQueue class has a take() method that blocks until an item in the collection becomes available.
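To see the difference, here is a small standalone sketch (not part of the server code; the QueueWaitDemo class and its strings are purely illustrative) contrasting a busy-wait loop with a blocking take().

import java.util.concurrent.LinkedBlockingQueue;

class QueueWaitDemo{
    public static void main(String args[]) throws Exception{
        final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<String>();

        // Busy-waiting: a loop like this spins flat out while the queue
        // is empty, pinning a CPU core at 100% doing no useful work.
        // while (queue.peek() == null) { /* spin */ }

        // Blocking: take() parks the thread until an item arrives and
        // uses no CPU while it waits.
        new Thread(){
            public void run(){
                try{
                    String item = queue.take(); // blocks here
                    System.out.println("Got: " + item);
                }catch(InterruptedException e){
                    e.printStackTrace();
                }
            }
        }.start();

        Thread.sleep(1000); // the waiting thread consumes no CPU during this pause
        queue.add("hello"); // wakes the blocked thread
    }
}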
In our case, the take() method will block the SocketThread until a socket is available in the queue. Once a socket becomes available, we can continue handling the client request/response. At this point, the thread is no longer blocked, so it will terminate when the run() method completes. To keep the thread alive, we simply call the run() method again, making it a recursive method. Listing 4 below shows the complete SocketThread class, which includes handling the client request and response.
Listing 4
class SocketThread extends Thread{
private final LinkedBlockingQueue<Socket> queue;
public SocketThread(){
this.queue = new LinkedBlockingQueue<Socket>();
}
public void setSocket(Socket socket){
this.queue.add(socket);
}
public void run(){
try{
Socket client = this.queue.take();
// Get A BufferedReader/BufferedWriter to handle reading and writing to the stream.
BufferedReader requestReader =
new BufferedReader(
new InputStreamReader(client.getInputStream()));
BufferedWriter responseWriter =
new BufferedWriter(
new OutputStreamWriter(client.getOutputStream()));
// Read the request headers until the blank line
// that marks the end of the headers.
while (true){
String headerLine = requestReader.readLine();
// readLine() returns null if the client closes the connection early.
if (headerLine == null || headerLine.isEmpty()){
break;
}
}
// How original is this?
responseWriter.write("Hello World\n");
responseWriter.flush();
// Closing the client connection closes both the input and output streams.
client.close();
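// Call run() again so the thread goes back to waiting for the next socket.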
this.run();
}catch(Exception e){
e.printStackTrace();
}
}
}
We can now create several instances of SocketThread to simulate a ThreadPool. Normally, a ThreadPool will abstract away the creation of threads and manage an internal array of threads, but I want to keep the code simple; the reason will become clearer later.
Let's create two instances of SocketThread in the main method and call the start() method on them. When the threads start, the run() method of each thread will wait for a socket.
Listing 5
import java.net.*;
import java.io.*;
import java.util.concurrent.LinkedBlockingQueue;
class HttpServer{
public static void main(String args[]){
SocketThread thread1 = new SocketThread();
thread1.start();
SocketThread thread2 = new SocketThread();
thread2.start();
...
}
}
When the server accepts a socket, we can call the setSocket() method on one of the threads we've created. Since we have two threads, we need a way to decide which one to use. We can solve this problem by alternating between the two threads, as shown below.
Listing 6
import java.net.*;
import java.io.*;
import java.util.concurrent.LinkedBlockingQueue;
class HttpServer{
public static void main(String args[]){
SocketThread thread1 = new SocketThread();
thread1.start();
SocketThread thread2 = new SocketThread();
thread2.start();
try{
// Create a new server socket and listen on port 9000
try (ServerSocket server = new ServerSocket(9000)){
// Continue to listen for client connections
int i = 0;
while (true){
// Accept a client connection. accept() is a blocking method.
Socket client = server.accept();
if (i % 2 == 0){
thread1.setSocket(client);
}else{
thread2.setSocket(client);
}
i++;
}
}
}catch(Exception e){
e.printStackTrace();
}
}
}
An alternative to alternating between the threads is to pick the thread with the fewest sockets in its queue. To achieve this, we could expose the size of the queue as a public method and then compare the two threads.
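For example, it might look something like the fragments below. This is a rough sketch rather than final code, and the getQueueSize() method is a hypothetical addition to SocketThread.

// Hypothetical addition to SocketThread: expose the size of its queue.
public int getQueueSize(){
    return this.queue.size();
}

// In the accept loop, hand the socket to the thread with the smaller queue.
Socket client = server.accept();
if (thread1.getQueueSize() <= thread2.getQueueSize()){
    thread1.setSocket(client);
}else{
    thread2.setSocket(client);
}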
Earlier I mentioned that there was a reason why I wanted to keep the code simple, and that's because Java has built-in classes to help create ThreadPools. In the next article, we'll refactor our code to make use of the ExecutorService API so we won't need to create and manage thread instances ourselves.
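As a rough preview of where we're heading (the details are for the next article, so treat this as a sketch rather than the final code), a fixed-size pool created with Executors.newFixedThreadPool() might be used along these lines:

import java.net.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class HttpServer{
    public static void main(String args[]){
        // A fixed pool of two threads, created and managed by the ExecutorService.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try (ServerSocket server = new ServerSocket(9000)){
            while (true){
                final Socket client = server.accept();
                // Submit the client handling as a task; the pool runs it
                // on whichever of its threads is idle.
                pool.execute(new Runnable(){
                    public void run(){
                        // Handle the request/response here, as in Listing 4.
                    }
                });
            }
        }catch(Exception e){
            e.printStackTrace();
        }
    }
}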