We ❤️ Open Source

A community education resource

Learning to program: 2 ways to copy files in C

Advance from reading files one character at a time in a loop to using a memory buffer to read and copy files in C.

Whenever I introduce someone to programming, whether that’s in an article or in a “teach yourself” video, I like to use methods that are easy for a beginner to understand. But the way you teach programming isn’t always the way you’d actually implement it in real life. There are always trade-offs, like performance or security, to balance with “making it easy to learn.” But the first time you learn something, it helps to keep things simple and build up to the other stuff once you understand the basics.

From simple to more complex examples

For example, if I were teaching someone about how to write their own version of the cat command in C, I would use obvious variable names and make the program as straightforward as possible. One simple implementation might look like this:

#include <stdio.h>

int main(int arg_count, char *arg_list[])
{
   int item;
   FILE *text_file;
   int letter;

   for (item = 1; item < arg_count; item = item + 1) {
       text_file = fopen(arg_list[item], "r");

       if (text_file != NULL) {
           do {
               letter = fgetc(text_file);

               if (letter != EOF) {
                   putchar(letter);
               }
           } while (letter != EOF);

           fclose(text_file);
       }
   }

   return 0;
}

This is a very simple version of cat that reads a list of files from the command line, which are stored in the arg_list array. The program opens each file using a file pointer called text_file, then reads the file one letter at a time using fgetc and prints each letter using putchar. Note that fgetc returns an int rather than a char, so it can represent the special EOF value as well as any character.

But while this version is very easy to read, it’s also very monolithic; everything happens inside the main program. That’s okay for a short program like this to teach the basics, but a better way to write the cat program is to break out the “read a file and print its contents” part into a function that is a bit more flexible. For example, we could rewrite this cat program like this:

#include <stdio.h>

void cat(FILE *in, FILE *out)
{
   int ch;

   while ((ch = fgetc(in)) != EOF) {
       fputc(ch, out);
   }
}

int main(int argc, char **argv)
{
   int i;
   FILE *pfile;

   for (i = 1; i < argc; i++) {
       pfile = fopen(argv[i], "r");

       if (pfile) {
           cat(pfile, stdout);
           fclose(pfile);
       }
   }

   return 0;
}

This puts the important stuff in a function called cat that reads from a file pointer called in and prints to a different file pointer called out. When we call the cat function, we can specify a different input and output. In this case, the main program only ever passes stdout, the standard output, when calling the cat function. But writing to a file pointer instead of using putchar makes the function more flexible, in case we ever decide to write directly to a different file later on.

This other version of cat also uses different variable names like i and pfile, which is common in very short programs that don't need descriptive variable names. Experienced programmers typically use i as a counter or index variable, and pfile as a pointer to a file. Otherwise, the program remains the same as the first simple version of cat.
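Because the cat function writes to whatever file pointer it's given, redirecting its output is just a matter of passing a different FILE pointer. For instance, here's a sketch of a helper that appends one file's contents to a log file instead of printing it; the append_to_log name and both file names are made up for illustration:

```c
#include <stdio.h>

/* same cat function as above: copy one stream to another */
void cat(FILE *in, FILE *out)
{
    int ch;

    while ((ch = fgetc(in)) != EOF) {
        fputc(ch, out);
    }
}

/* hypothetical helper: append a file's contents to a log file;
   returns 1 on success, 0 if either file could not be opened */
int append_to_log(const char *srcname, const char *logname)
{
    FILE *src = fopen(srcname, "r");
    FILE *log = fopen(logname, "a");
    int ok = (src != NULL) && (log != NULL);

    if (ok) {
        cat(src, log);
    }

    if (src) fclose(src);
    if (log) fclose(log);

    return ok;
}
```

A call like append_to_log("notes.txt", "output.log") reuses the exact same cat function that the main program uses for standard output.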

But reading and writing one letter at a time isn’t the best way to do it. On modern systems with very fast CPUs and solid state drives, you might not notice the difference, but the performance impact of reading one letter then printing one letter will become more noticeable on slower systems.

To make this run a bit faster on these slower systems, you might instead read part of the file into memory (called a buffer) then write that buffer to the output. This is still the same basic algorithm of “read something, then write something,” but it’s able to do more at once, so it’s much faster even on slow systems. For example, we might update the cat function one more time to read a file into a buffer using fread, then write it back out using fwrite:

void cat(FILE *in, FILE *out)
{
   unsigned char buf[128];
   size_t numread;

   while (!feof(in)) {
       numread = fread(buf, sizeof(unsigned char), 128, in);
       fwrite(buf, sizeof(unsigned char), numread, out);
   }
}

Since the core functionality is in the cat function, we only need to update that part of the program; the main program can stay the same. This function creates a buffer of 128 bytes; an unsigned char is exactly one byte long. The while loop continues until it reaches the end of the file. Within the loop, it reads a bunch of data (up to 128 bytes) into the buffer using fread, and keeps track of how much it read using the numread variable. Then it writes that data back out using fwrite.
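To see why the numread variable matters, imagine a 300-byte file: fread fills the 128-byte buffer twice, then returns one final short chunk of just 44 bytes. Here's a small sketch that records each chunk size; the read_chunks helper is made up for this illustration:

```c
#include <stdio.h>

/* hypothetical helper: read a stream in 128-byte chunks, storing each
   chunk's size in sizes[] (up to max entries); returns the chunk count */
int read_chunks(FILE *in, size_t sizes[], int max)
{
    unsigned char buf[128];
    size_t numread;
    int count = 0;

    while ((numread = fread(buf, sizeof(unsigned char), sizeof(buf), in)) > 0) {
        if (count < max) {
            sizes[count] = numread;
        }
        count = count + 1;
    }

    return count;
}
```

For a 300-byte input, this records chunks of 128, 128, and 44 bytes, which is exactly why the cat function must pass numread, not 128, to fwrite: blindly writing 128 bytes on the last chunk would append garbage to the output.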

This is basically the same cat program that we wrote above, but it runs much faster even on slow systems because it reads a bunch of data at once. That way, the operating system can just read some data without having to keep going back to the file to read one letter at a time.

However, I wouldn’t use the “buffer” version of the program to teach someone about programming—at least, not right away. It’s more complicated and difficult to understand. Instead, I’d start with the “one letter” version, and move up to the “buffer” method once we’d learned the basics about programming. The trade-off here is “easy to learn” versus “runs fast.”

Running on slow systems

It can be hard to imagine what I mean by a “slow” system. Today’s computers are very fast; my computer at home has a 4-core 8th Generation Intel Core i3-8100T running at 3.10 GHz, with NVMe M.2 storage, and my system is already a few years old. On a system that fast, reading and writing one letter at a time runs at basically the same speed as reading and writing a bunch of data at once using a buffer.

But not every system is that fast. For example, what we recognize as the personal computing era started in the late 1970s with the Commodore PET, TRS-80, and Apple II computers. In 1981, IBM released the IBM Personal Computer 5150, which became the basis for “PC” computers built after that. My 2019 PC can trace its lineage back to the original 1981 IBM PC.

In 1981, IBM used a very simple operating system called DOS, the Disk Operating System, provided by Microsoft. And DOS is still around, as the open source FreeDOS Project. FreeDOS is a more modern version of DOS, but it’s still DOS.

Because DOS makes certain assumptions about the hardware it’s running on, you aren’t likely to run DOS directly on very new computers. Instead, most people run FreeDOS in a virtual machine or a PC emulator of some kind. And that provides an excellent opportunity to demonstrate running a slow system, because we can artificially slow down a virtual machine.

For this demonstration, I’ll use the QEMU virtual machine. QEMU supports Linux KVM, the Kernel-based Virtual Machine, which makes the “guest” operating system run at almost native speeds. Without KVM, the “guest” runs through software emulation, which is much slower. So we can simulate a much slower system by running QEMU without KVM support.

A program to copy files

Let’s write a simple FreeDOS program that reads data from one file and writes it to another file. This is basically the same as the Linux cp command, or the COPY command on DOS.

As with the cat sample program, one way to write this program is to break out the core behavior into a function called cpy, which reads from one file pointer and writes to another. In the simplest case, this function reads one character at a time using fgetc and writes each character with fputc:

#include <stdio.h>

void cpy(FILE *in, FILE *out)
{
   int ch;

   while ((ch = fgetc(in)) != EOF) {
       fputc(ch, out);
   }
}

This is the same as the cat function we wrote above, and is easy to explain to a new programmer: the function reads a character, then writes that character, over and over until it reaches the end of the file.

But while that’s easy to explain, it isn’t very fast to run. So a better way to write this function is to read a bunch of data at once, then write that to the output:

#include <stdio.h>

#define BUFSIZE 128

typedef unsigned char byte_t;

void cpy(FILE *in, FILE *out)
{
   byte_t buf[BUFSIZE];
   size_t numread, numwrite;

   while (!feof(in)) {
       numread = fread(buf, sizeof(byte_t), BUFSIZE, in);

       if (numread > 0) {
           numwrite = fwrite(buf, sizeof(byte_t), numread, out);

           if (numwrite != numread) {
               fputs("mismatch!\n", stderr);
               return;
           }
       }
   }
}

This uses a typedef statement so we don’t need to keep typing unsigned char everywhere, but it’s otherwise the same as the “buffer” version of the cat function from above. I’ve also added some extra features here, because we know we’ll be copying files, and it’s very important to know that things work along the way. For example, the function has a little extra code to detect whether the amount of data it wrote differs from the amount of data it read, printing an error message and returning early if that happens.
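Taking the error checking one step further, the function could report failure to its caller instead of only printing a message. Here's a sketch of such a variant; the cpy_checked name and its return codes are made up, and it loops on fread's return value rather than feof. Since this changes the function's signature, the programs in this article keep the original void version:

```c
#include <stdio.h>

#define BUFSIZE 128

/* hypothetical variant: returns 0 on success, 1 on a short write,
   and 2 on a read error, so the caller can react to failures */
int cpy_checked(FILE *in, FILE *out)
{
    unsigned char buf[BUFSIZE];
    size_t numread;

    while ((numread = fread(buf, 1, BUFSIZE, in)) > 0) {
        if (fwrite(buf, 1, numread, out) != numread) {
            return 1;  /* wrote less than we read */
        }
    }

    /* fread returning 0 means end-of-file or an error; tell them apart */
    return ferror(in) ? 2 : 0;
}
```

A caller such as main could then turn a nonzero result into a nonzero exit status, so shell scripts and batch files can detect a failed copy.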

Because both of the functions take the same function arguments, we can write just one main program that uses either version of the cpy function:

#include <stdio.h>

void cpy(FILE *in, FILE *out);

int main(int argc, char **argv)
{
   FILE *src, *dest;

   /* check command line */

   if (argc != 3) {
       fprintf(stderr, "usage: cpy src dest\n");
       return 1;
   }

   /* open files */

   src = fopen(argv[1], "rb");
   if (src == NULL) {
       fprintf(stderr, "cannot open %s for reading\n", argv[1]);
       return 2;
   }

   dest = fopen(argv[2], "wb");
   if (dest == NULL) {
       fprintf(stderr, "cannot open %s for writing\n", argv[2]);
       fclose(src);
       return 3;
   }

   /* copy */

   cpy(src, dest);
   fclose(src);
   fclose(dest);

   return 0;
}

That’s a longer main program than the one in the cat example, but it also includes some extra code to detect errors when opening files, or when the user didn’t provide the right number of options on the command line. The basic outline is:

  1. Open the input file as src
  2. Open the output file as dest
  3. Use the cpy function to copy from src to dest
  4. Close both files

If we save the “one character at a time” version of the cpy function as cpy1.c and the “buffer” version as cpybuf.c, and the main program as cpy.c, we can compile two separate programs. I’ll use the open source OpenWatcom C compiler, which we include in the FreeDOS distribution:

wcl -q cpy1.c cpy.c
wcl -q cpybuf.c cpy.c

Testing on a slow system

With these two versions of the cpy program, we can see how fast each one runs on FreeDOS under QEMU, with and without KVM support. To run this test consistently, I wrote a DOS “batch” file to copy a 9 MB zip file, then use the COMP command to compare the original zip file with the copy. It also uses RUNTIME (which you can install from the FreeDOS distribution) to report how long each command takes to run. My TEST.BAT file looks like this:

@ECHO off

set SRC=C:\TEMP\FILE.ZIP
set DEST=C:\TEMP\FILE.OUT

echo using CPY1 ..
runtime CPY1 %SRC% %DEST%
COMP %SRC% %DEST%

echo using CPYBUF ..
runtime CPYBUF %SRC% %DEST%
COMP %SRC% %DEST%

On Linux, I started a new instance of FreeDOS running in QEMU, with KVM. Running TEST.BAT gives this output:

using CPY1 ..
Run time was 29.615385 seconds
Files compare OK.
using CPYBUF ..
Run time was 29.285714 seconds
Files compare OK.

KVM is the Linux kernel’s virtual machine accelerator, and it does a great job of helping the “guest” operating system run really fast inside QEMU. In this case, copying a 9 MB file one character at a time (with cpy1) and by using a buffer (with cpybuf) both take about 29 seconds.

But if I restart the virtual machine and don’t enable KVM support, then QEMU has to process all machine instructions through software. That slows down most operations. For this example, running QEMU without KVM is probably closer to running FreeDOS on real hardware from the late 1990s era. You can see it takes much longer to copy the 9 MB file with these settings:

using CPY1 ..
Run time was 105.879121 seconds
Files compare OK.
using CPYBUF ..
Run time was 23.846154 seconds
Files compare OK.

Copying the file by reading a bunch of data at once (using a buffer) takes about the same amount of time as before, roughly 24 seconds. But reading and writing files one character at a time is much slower on this system, about 105 seconds (that’s 1 minute, 45 seconds), more than four times slower than the buffered version.

Teach simple methods first

The simplest methods can be much easier for beginner programmers to learn, but the trade-off is that these basic implementations aren’t always the best choice in the real world. Even so, I still like to use the simpler methods when I teach programming. My approach is “learn the basics first,” then “use that to learn the more complex stuff,” and I still think that’s the right approach. Just keep in mind that the simplest version isn’t always the best version, even though it does the same thing.

About the Author

Jim Hall is an open source software advocate and developer, best known for usability testing in GNOME and as the founder + project coordinator of FreeDOS. At work, Jim is CEO of Hallmentum, an IT executive consulting company that provides hands-on IT Leadership training, workshops, and coaching.
