When the file ends, the stream emits the end
event, and in its handler we finish our response by calling res.end.
The outgoing connection is then closed, since the file has been sent in full. The resulting code is fairly generic:
var http = require('http');
var fs = require('fs');

new http.Server(function(req, res) {
  // res instanceof http.ServerResponse < stream.Writable
  if (req.url == '/big.html') {
    var file = new fs.ReadStream('big.html');
    sendFile(file, res);
  }
}).listen(3000);

function sendFile(file, res) {
  file.on('readable', write);

  function write() {
    var fileContent = file.read(); // read a chunk from the file
    if (fileContent && !res.write(fileContent)) { // send it to the client
      file.removeListener('readable', write);
      res.once('drain', function() { // wait until the response is drained
        file.on('readable', write);
        write();
      });
    }
  }

  file.on('end', function() {
    res.end();
  });
}
It implements a fairly basic algorithm: moving data from one stream to another using the standard methods of readable
and writable
streams. Of course, the Node.js developers anticipated this pattern and added an optimized implementation of it to the standard stream library.
The method is called pipe
. See the example:
var http = require('http');
var fs = require('fs');

new http.Server(function(req, res) {
  // res instanceof http.ServerResponse < stream.Writable
  if (req.url == '/big.html') {
    var file = new fs.ReadStream('big.html');
    sendFile(file, res);
  }
}).listen(3000);

function sendFile(file, res) {
  file.pipe(res);
}
All readable streams have it, and it is called as readable.pipe(destination)
, where destination is the writable stream to write into. Moreover, besides being a single line of code, it gives one more bonus: for example, you can pipe
the same input stream into several output streams:
For example, in addition to the client response, let us also write the file to the standard output of the process:

function sendFile(file, res) {
  file.pipe(res);
  file.pipe(process.stdout);
}
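As a side note, pipe returns its destination stream, so transform streams can be chained. Here is a minimal sketch using the built-in zlib module; compressing the response is our own example here, not part of the lesson code:

var zlib = require('zlib');

function sendFileGzipped(file, res) {
  res.setHeader('Content-Encoding', 'gzip');
  // file -> gzip transform -> client response
  file.pipe(zlib.createGzip()).pipe(res);
}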
So, we launch the stdout version and see the file both in the browser and in the console.
Is this code ready for production use, or are there other aspects we need to consider?
First, note that there is no error handling at all. If the file is missing or cannot be read, the whole server will crash, and we do not want that scenario. So let us add a handler:
function sendFile(file, res) {
  file.pipe(res);

  file.on('error', function(err) {
    res.statusCode = 500;
    res.end("Server Error");
    console.error(err);
  });
}
Now we are closer to real life, and some guides even present such code as good, but it is not: you should not deploy it to a real-life server. Why? To demonstrate the problem, let us add extra handlers for the file's open
and close
events.
function sendFile(file, res) {
  file.pipe(res);

  file.on('error', function(err) {
    res.statusCode = 500;
    res.end("Server Error");
    console.error(err);
  });

  file
    .on('open', function() {
      console.log("open");
    })
    .on('close', function() {
      console.log("close");
    });
}
Let us launch it, reload the page several times, and watch the console.
Each request opens the file, which is perfectly normal; the file is then fully sent and closed.
Now let us open a terminal and run the curl
utility, which will download the URL
http://localhost:3000/big.html
with a speed limit of 1 KB/sec:
curl --limit-rate 1k http://localhost:3000/big.html
If you work under Windows, this utility is easy to find and install. So, launch it: the file gets opened and starts downloading, and everything seems fine.
Now press Ctrl+C
to abort the download. Notice that no close appears in the console. Try it again: it turns out that if a client opens the connection but closes it before the download has finished, the file stays open.
And an open file that never closes means trouble: first, all structures associated with it remain in memory; second, operating systems limit the number of simultaneously open files; third, the corresponding stream object stays in memory forever, together with the whole closure that contains it.
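To make the leak easier to observe, the open/close logging above could be extended with a simple counter; this is an illustrative sketch, and the openFiles variable is our own addition, not part of any API:

var openFiles = 0; // how many files are currently open, for demonstration only

function sendFile(file, res) {
  file.pipe(res);

  file
    .on('open', function() {
      console.log("open, files in use: " + (++openFiles));
    })
    .on('close', function() {
      console.log("close, files in use: " + (--openFiles));
    });
}

Abort a few curl downloads and the number will only grow.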
To avoid this problem and its consequences, it is enough to catch the moment when the connection closes and make sure the file is closed as well.
The event we are interested in is close
on the response: res.on('close')
. You will not find this event on an ordinary stream.Writable
, which means it is an extension of the standard stream interface. Just as the file stream has close
, the ServerResponse
object has close
too, but the meaning of the latter differs radically from the meaning of the former. This distinction is important: for the file stream, close
means normal completion (a file is always closed in the end), while for the response object close
is a sign that the connection has failed. A normally completed response emits finish
, not close.
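To see the difference for yourself, one could log both events on the response (a small illustrative sketch):

res.on('finish', function() {
  console.log("finish"); // the response was fully sent to the client
});
res.on('close', function() {
  console.log("close");  // the connection was broken before the response finished
});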
So, if the connection has been broken, we need to close the file and free its resources, because there is nobody left to deliver the file to. For that purpose, we call the stream method file.destroy
:
res.on('close', function() {
  file.destroy();
});
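Putting all the pieces together, the final version of sendFile from this lesson looks like this:

function sendFile(file, res) {
  file.pipe(res);

  // the file could not be read: report the error and respond with a 500
  file.on('error', function(err) {
    res.statusCode = 500;
    res.end("Server Error");
    console.error(err);
  });

  // the client dropped the connection: release the file and its resources
  res.on('close', function() {
    file.destroy();
  });
}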
Now everything is all right. Let us check once more: launch it, abort the download, and the file gets closed. This code is now ready for a real-life server.
You can find the lesson code in our repository.
The article materials are based on the following screencast.
We are looking forward to meeting you on our website soshace.com.