Meet our Interns: Tim Scalzo - Documentation Site Development

This is the last of several blog posts written by the high school interns who have been working with us this summer.

Hello! My name is Tim Scalzo, and my project this summer was to help develop a website to host and share work done by the Let's Build Rockets team. The documentation website will make it possible for other people to access and learn from the work we've been doing. It will be a dynamic website, which is not something I had much experience with before I began the project, but I learned a lot about many different things as I worked over the summer.


The first thing we did was plan out what would be on the site, how it would look, and how it would get there. We decided there would be three basic types of things displayed on the site: Files, Articles, and Users. Each type would be stored as a table in the database, with a column for each of its data values and a row for each individual entity. We also did some brainstorming on how the website should look; since we never got around to implementing actual styles or formatting during the summer, I won't get into the details. Putting it simply, we decided it should look similar to Wikipedia. Each article would have a table of contents, an author, content, and files associated with it.
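To make that a little more concrete, here's a rough sketch of the three tables. The column names here are just examples I'm using for illustration, not our final design:

```javascript
// Rough sketch of the three tables; column names are examples, not the final design.
const tables = {
  users:    ['id', 'name', 'email'],
  files:    ['id', 'name', 'drive_id', 'mime_type', 'owner_id'],
  articles: ['id', 'title', 'content', 'author_id'],
};

// Each key is a table, each listed name is a column,
// and every row in a table is one entity of that type.
function columnsFor(table) {
  return tables[table] || [];
}
```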


I then built a very basic dynamic site with Next.js by following [this tutorial](). It was a good way to get a handle on how dynamic websites work, since I had only worked with static sites in the past. I then stripped the tutorial site down to just the code we needed for a dynamically built website. Part of the tutorial was setting up a server that handles URL masking, which means every page request from the user's browser goes through the server. I wrote some functions that take URLs matching a pattern, load the corresponding page, and then rewrite the URL to look nicer. This is called URL masking, and it makes it possible to pass information in URLs that the user doesn't see. That's what makes this a dynamic website: the information isn't stored statically as fully formatted webpages. Instead, it's stored in a database as little bits of information that can be loaded into a page wherever they're needed.
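Here's a simplified sketch of the idea. The route patterns and page names are just examples, and the real custom server does more, but the matching step looks roughly like this:

```javascript
// Simplified sketch of URL masking: a pretty URL like /articles/rocket-fins
// is matched against a pattern, and the server renders an internal page with
// hidden parameters filled in. Patterns and page names here are examples.
const routes = [
  { pattern: /^\/articles\/([^/]+)$/, page: '/article', params: ['slug'] },
  { pattern: /^\/users\/([^/]+)$/,    page: '/user',    params: ['name'] },
];

function matchRoute(url) {
  for (const route of routes) {
    const match = route.pattern.exec(url);
    if (match) {
      const query = {};
      route.params.forEach((name, i) => { query[name] = match[i + 1]; });
      // A custom Next.js server would now call:
      //   app.render(req, res, route.page, query)
      return { page: route.page, query };
    }
  }
  return null; // no match: fall through to normal handling
}
```

The user only ever sees the pretty URL; the slug or name gets passed to the internal page as a query value they never have to type.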


We decided to use PostgreSQL for our database. Eric set up this development environment in a VM on his computer. I'm not sure, but it's possible the database has since been moved to DigitalOcean for more reliable hosting. At first I SSHed into this server to work on the database directly. I then learned about Knex.js, an SQL query builder that makes it much easier to interact with the database from JavaScript. I learned its syntax and was able to write some basic functions based on the information we'd have to pull from the database.
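Functions like these might look something like this. The table and column names are examples rather than our exact schema, and I'm passing the configured Knex instance in as an argument:

```javascript
// Hypothetical Knex.js query functions; table and column names are examples.
// Each takes the configured Knex instance, so the same code works anywhere.
function getArticleById(knex, id) {
  return knex('articles').where({ id }).first();
}

function getArticlesByAuthor(knex, authorId) {
  return knex('articles').where({ author_id: authorId });
}
```

The same chainable style covers writes too, e.g. `knex('articles').insert({ title, content })`, which beats hand-typing SQL in a terminal.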

GET & POST Requests

Every time you type a URL into your browser and hit enter, you are making a GET request to that server: your computer is asking the server to send it the information it is hosting. Once the browser receives the information, it displays it as a webpage. You use GET requests every day without even realizing it. A POST request is the opposite of a GET request: it sends information to the server and asks the server to do something with it.

In our website's case, the information lives in the database. Since this is a dynamic website, it loads information from the database and constructs the webpages dynamically using GET requests. To do this I wrote some simple functions that use Knex.js to ask the database for information, and some Knex.js functions that handle POST requests to update the database based on information the user enters on the website. The catch with Knex.js functions is that their code needs to be loaded on every page that runs them. To make them easier to run and each page quicker to load, we created API pages on the server that run specific functions based on the information fed into them. We're then able to use the information loaded by the API pages to build the pages of the website.
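An API page for articles might look roughly like this. This is a sketch, not our real code: where the real pages use Knex.js against Postgres, I'm using a simple in-memory stand-in here so the shape of the handler is easy to see:

```javascript
// Sketch of an API page: GET returns rows, POST adds a row.
// `db` is an in-memory stand-in for the real Knex.js/Postgres layer.
const db = { articles: [{ id: 1, title: 'Rocket Fins' }] };

function articlesApi(req, res) {
  if (req.method === 'GET') {
    // GET: the browser asks for information and gets it back as JSON.
    res.statusCode = 200;
    res.end(JSON.stringify(db.articles));
  } else if (req.method === 'POST') {
    // POST: the browser sends information and asks us to store it.
    const article = { id: db.articles.length + 1, ...req.body };
    db.articles.push(article);
    res.statusCode = 201;
    res.end(JSON.stringify(article));
  } else {
    res.statusCode = 405; // method not allowed
    res.end();
  }
}
```

Every page that needs article data can then just GET this one API page instead of loading the database code itself.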

Google API

Since most of Let's Build Rockets' files are hosted on Google Drive, we decided that our documentation site should sync with it periodically. We used Google's API to pull file metadata and store it in our database. The thing about Google Drive is that you almost always need to prove you have access to the files you're attempting to view or edit. There are a couple of ways of authenticating with Google's API, and the first way we used was a third-party NPM package that authenticated by logging into Google in a browser. This worked for users editing a file online, but we realized it wouldn't work when the server needed to update our database automatically. At that point I had already written some fairly complex functions to update the database and Google Drive. As very often happens in programming, I rewrote those functions to be simpler so they would work with Google's own NPM package for Google Drive. This ended up being a great decision because it made it possible to authenticate automatically (using a local file that holds user secrets). After fiddling with the functions that write to Google Drive, Eric and I figured out how to pull all of the metadata for the files in a Google Drive folder.
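As a sketch of the metadata pull, here's roughly what building the request looks like. The query string and field list are my examples of what a sync like ours might ask for, not our exact code:

```javascript
// Sketch: building the parameters for listing file metadata in one
// Google Drive folder via Google's googleapis package. The query and
// field list are example values, not the site's actual sync code.
function buildDriveListParams(folderId) {
  return {
    q: `'${folderId}' in parents and trashed = false`,
    fields: 'files(id, name, mimeType, modifiedTime)',
  };
}

// With an authenticated client it would be used roughly like:
//   const drive = google.drive({ version: 'v3', auth });
//   const res = await drive.files.list(buildDriveListParams(folderId));
//   // res.data.files is the metadata we'd store in our database.
```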

Browser Editor

As you may or may not know, PostgreSQL's command-line client, psql, is cool but kind of slow to work in. When I SSH into the database server, I type an IP address and password into the terminal, which lets me control the database remotely through psql. However, everything is controlled by text commands, so editing an entry in the database takes a couple of minutes of navigating to the table, entering the correct command, and then making sure you typed it correctly. It's much simpler to write some functions with Knex.js. I mentioned earlier that I linked those functions to pages using the server's URL masking, but it's kind of clunky to edit data by typing everything into a URL. So the next step was, as the title of this section says, creating a browser editor. Instead of typing changes into the URL, a user can type them into a contextual editor that makes it much clearer how to edit specific parts of each type of data. First I created three editors, one for each type of data (files, articles, & users), each with fields for what the user can edit. I then started writing the code that would let the editor verify that valid information had been entered before sending it to the database. But that's when summer ended. I hope to eventually finish the editor, and I'll be sure to write another blog post when I do.
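That validation step amounts to checking each field before anything is sent to the database. A stripped-down version might look like this; the required fields and rules are examples, not the editor's final logic:

```javascript
// Sketch of editor validation; required fields per data type are examples.
const requiredFields = {
  files:    ['name'],
  articles: ['title', 'content'],
  users:    ['name', 'email'],
};

// Returns { valid, errors } so the editor can show what needs fixing
// before making a POST request to the database.
function validateEntry(type, entry) {
  const errors = [];
  for (const field of requiredFields[type] || []) {
    if (!entry[field] || String(entry[field]).trim() === '') {
      errors.push(`${field} is required`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```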

See Tim's work on GitHub