Phil Dreizen

React Tic-Tac-Toe

Professionally, I work as a backend developer on a project that uses React in the frontend. I do have experience working with our React code, of course, but it's far from the main thing I do.

Since I'll be using React in a bigger personal project I'm working on, I wanted to learn the fundamentals in a formal way. I took an intro React course on egghead.io. The course was fine, but I have to say that a book or written tutorial just works better for me. The Road to React is one of a few well-regarded books on React, and so I worked through all the chapters. The advice at the end of the book is good: just go write a React project. Don't do any more tutorials.

So here is my first React "projectlet": an implementation of Tic-Tac-Toe. Nothing fancy here, just a working version of tic-tac-toe. It might be fun to add an AI opponent, but that really has little to do with React. And of course, there's plenty of room to improve the look.

FYI: I'm really falling behind on my TIL posts! Today I'm talking about something a few weeks old.

Troubleshooting slow NFS transfer speeds

I have several hundred gigabytes worth of very small files I'm transferring from my laptop to my NAS server via an NFS mount over wifi. You'd think copying some (okay, a lot) of files around would be trivial, but it turned out not to be so easy.

My first attempt was perhaps naive. I simply used mv:

> mv $src_dir $dir_on_nas

But it was taking forever. Many hours later I gave up and decided to try again with a tar pipe. I tried something like:

> cd $src_dir && tar cf - . | tar xf - -C $dir_on_nas

I ran it overnight and it did not get very far. In hindsight, I'm not sure why I thought this would be much different than using mv.

In any case, my next step was to take a look at what my transfer speed was. TIL about pv, which can measure the progress of data through a Unix pipe. If you provide it the size of the data being transferred via the -s argument, it can also provide an ETA. And so the above tar pipe was modified to:

tar cf - . | pv -s $(du -sb . | awk '{print $1}') | tar xf - -C $dir_on_nas

pv displayed a paltry transfer rate of 1-2 MiB/s. That's frustratingly slow, and it explained why everything was taking so long.

My next step was to measure the network speed between my laptop and the NAS. TIL about iperf3, which runs a server that listens on one machine; other machines connect to it via iperf3 to measure the connection speed. (I should note, my NAS is a Synology, so installing iperf3 required jumping through a few hoops itself.)

# on the NAS
> iperf3 -s

# on my laptop...xxx is a stand-in for the NAS IP
> iperf3 -c 192.168.x.xxx

My network speed is not good, ~25 MiB/s, but it's a lot better than ~1-2. Clearly, I need to troubleshoot the LAN itself, but for now, I'm going to leave it alone. At this point, I'd be happy with that kind of transfer speed.
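When I do get around to troubleshooting the LAN, iperf3 has a few switches that would probably be my starting point. This is just a note to future me, not something I actually ran:

# untested: -R reverses direction (NAS sends to the laptop), -P 4 uses 4 parallel streams, -t 30 runs for 30 seconds
> iperf3 -c 192.168.x.xxx -R -P 4 -t 30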

From the beginning I did suspect that the problem could be that I'm transferring a lot of small files. But I didn't think it would have this much of an impact. Cursory searches confirm that this is a pretty big problem, and I've found plenty of advice about various configuration changes I could make to the mount command to make the transfer go faster. But, if the issue really is just the number of files, I'm satisfied to transfer them over as a tarball and extract it later.
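For reference, the kind of mount tweaks that keep coming up in those searches look roughly like this. I haven't tried any of them, and the specific options, values, and paths below are assumptions on my part, not something I've verified against my setup:

# untested sketch of commonly suggested NFS client mount options for small-file workloads:
#   larger rsize/wsize for bigger read/write requests,
#   noatime/nodiratime to skip access-time updates,
#   nconnect for multiple TCP connections (needs a reasonably recent kernel)
> sudo mount -t nfs -o rsize=1048576,wsize=1048576,noatime,nodiratime,nconnect=4 192.168.x.xxx:/volume1/share /mnt/nas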

Before trying the single-file tar, I decided to use dd to measure the write speed to the NAS. Something like:

dd if=/dev/zero of=$dir_on_nas/test.dat count=100000
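# note: dd's default block size is 512 bytes, so count=100000 writes roughly 50 MB of zeros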

This showed a transfer speed of about 23 MB/s, which is much more in line with iperf3. That really does suggest that I should be able to achieve that kind of speed if I packed my smaller files up into just one file.
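As an aside, dd with tiny default blocks isn't a great benchmark. If I redo this measurement, I'd probably use something more like the line below (untested, and the sizes are arbitrary); larger blocks and a final fsync keep caching from inflating the number:

# untested: write ~1 GB in 1 MiB blocks, with conv=fsync so the flush to the NAS is included in the timing
dd if=/dev/zero of=$dir_on_nas/test.dat bs=1M count=1000 conv=fsync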

So, this time, I tried transferring the files as a tarball without extracting it on the other end (and notice I'm compressing here as well):

tar -zcf - . | pv -s $(du -sb . | awk '{print $1}') > "$dir_on_nas/file.tar.gz"

And yes, pv showed speeds of 22-23 MiB/s! Yay!

Except I should mention: I also tried this without compressing the data (leaving off the -z flag). Without compression, pv reports the same transfer speed, but the transfer seemingly "freezes" every few seconds. I still can't quite explain it. It has the "feel" of the pipe filling up and the tar process getting blocked until the pipe empties, but wouldn't pv report a lower transfer rate as a result? Maybe not? I'm not sure.
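If I revisit it, one idea would be to ask pv for both the running average and the instantaneous rate, and to give it a bigger buffer, to see whether the freezes show up in the numbers. This is just an untested guess, not something from the original run:

# untested: -a shows the average rate, -r shows the current rate, -B 16m gives pv a 16 MiB buffer
tar cf - . | pv -a -r -B 16m > "$dir_on_nas/file.tar"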

In any case, with a single compressed file, the transfer speed is much faster, which is great.
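The extraction can wait. When I get to it, it'll be something along these lines, run directly on the NAS rather than over the mount (the destination path here is a placeholder):

# untested, run locally on the NAS: unpack the tarball into the destination directory
tar -zxf file.tar.gz -C /volume1/dest_dir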

tags: TIL, iperf3, nfs, pv, tar
Python Flask

I decided to learn Flask for the purpose of implementing a RESTful API in Python. The two major choices seem to be Django and Flask. Flask is the lightweight option, which appealed to me for the flexibility that comes with it.

After doing the tutorial, I like it enough to try it out.

One thing I'm not liking from the tutorial is how a single view function has to handle multiple methods (i.e., GET and POST), explicitly checking the request for which method is being used. It's hardly a deal-breaker.

I did briefly look at Flask-RESTful, which addresses this by having each URL map to a "resource" class with methods that correspond to each HTTP method. So the class's post() method is mapped to a POST request. I think I prefer that, so I'll be looking at Flask-RESTful and its various forks (Flask-RESTPlus, Flask-RESTX, and so on).
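To make the contrast concrete, here's a minimal sketch of the two styles side by side. The route, names, and payloads are mine, not from the tutorial, and it's just an illustration rather than how I'd actually structure an app:

from flask import Flask, request, jsonify
from flask_restful import Api, Resource

app = Flask(__name__)

# vanilla Flask: one view function handles both methods and checks request.method itself
@app.route("/notes", methods=["GET", "POST"])
def notes():
    if request.method == "POST":
        return jsonify(created=request.get_json()), 201
    return jsonify(notes=[])

# Flask-RESTful: each HTTP method maps to a method on a Resource class
api = Api(app)

class NoteList(Resource):
    def get(self):
        return {"notes": []}

    def post(self):
        return {"created": request.get_json()}, 201

api.add_resource(NoteList, "/api/notes")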

FYI: I'm currently most familiar with Java's Jersey framework for REST APIs.

BTW: Though Flask-RESTful's "resource" class is named the same as Jersey's "Resource" concept, I actually think vanilla Flask's Blueprint is the closer analogue.

tags: TIL, flask, python
Intro to "Today I Learned" (TIL) Posts

This post is an announcement: I'm taking some time to work on some personal projects and learn some new things, and I plan on sharing some of that here. I'll be working on both larger projects and smaller "projectlets." Stay tuned.