This is an update to my previous script for deploying Storybook to S3 via GitLab Pipelines. We’re running a dev server on EC2, and I wanted to be able to deploy the project to it using SFTP.
Create an SSH key on your local machine and save it without a password using the following command:
ssh-keygen -t ed25519 -C "GitLab SSH key"
You should have two new files in your ~/.ssh directory:
id_ed25519 — the private key
id_ed25519.pub — the public key
Add Key to GitLab CI/CD Settings
Copy the contents of the private key and go back to your GitLab project. Navigate to Settings -> CI/CD -> Variables -> Expand -> Add Variable. GitLab variables are key-value pairs: name the key SSH_PRIVATE_KEY, paste the private key into the value field, and click Add Variable.
Add two more variables:
SSH_USER — the name of the user on the remote server
SSH_HOST — the IP address of the remote server
Copy the contents of the public key and go back to the remote server. Log in as the same user you specified in the SSH_USER GitLab variable. If that user doesn’t exist yet, now is the time to create it.
Add SSH Key to Server
Go to /home/<username>/.ssh. If the .ssh directory doesn’t exist, create it. Paste the public key into the authorized_keys file; if that file doesn’t exist, create it too.
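If you’re doing this from a shell on the server, it looks roughly like this (assuming you’ve copied id_ed25519.pub over from the key generated above):

```sh
# Create the .ssh directory with the permissions sshd expects
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Append the public key and lock down the file
cat id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```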
Important Note: I ended up using node:16 for my script because my instance of Storybook was on Webpack 4, and node:latest was pulling Node 18, which wasn’t compatible. If you’re using Webpack 5, you should be able to use a later version. I’m also using node:16 instead of an alpine or buster variant because I ran into issues adding the SSH keys on those images. I’ll work on optimizing this later for performance, but it works.
```yaml
image: node:16

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build-storybook
  artifacts:
    paths:
      - storybook-static
  only:
    - develop
    - main

deploy:
  stage: deploy
  artifacts:
    paths:
      - storybook-static
  before_script:
    - "command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $SSH_HOST >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # debug: confirm the working directory contents
    - ls -la
  script:
    - echo "$SSH_USER@$SSH_HOST"
    # clean up old storybook files
    - ssh -p22 $SSH_USER@$SSH_HOST "rm -rf /home/ddl/mnaddl/public_html/*"
    # deploy artifacts
    - scp -P22 -r storybook-static/* $SSH_USER@$SSH_HOST:/home/ddl/mnaddl/public_html
  only:
    - develop
    - main
```
I’ve been trying to build more CI/CD scripts using GitLab to automate pipeline deployments for work. Here’s a useful one for building and deploying a React app to Amazon S3.
You’ll need to add a variable called S3_BUCKET_NAME to your repo or replace the variable with your bucket path.
```yaml
stages:
  - build
  - deploy

build react-app:
  # I'm using node:latest, but be sure to test or change to a version you know works.
  # Sometimes node updates break the npm script.
  image: node:latest
  stage: build
  only:
    - master
  script:
    # Set PATH
    - export PATH=$PATH:/usr/bin/npm
    # Install dependencies
    - npm install
    # Build App
    - CI=false npm run build
  artifacts:
    paths:
      # Build folder
      - build/
    expire_in: 1 hour

deploy master:
  image: python:latest
  stage: deploy
  only:
    - master
  script:
    - pip3 install awscli
    - aws s3 sync ./build s3://$S3_BUCKET_NAME --acl public-read
```
I created a new personal resume site and decided to make it a static site since it wouldn’t be updated frequently. I evaluated Nuxt, Gatsby, and a few others, but settled on Jigsaw, a static site generator based on Laravel. I had never used it before and figured this would be a good learning experience while building something I needed. I was pleasantly surprised by how easy it is to use and set up, so kudos to the Tighten team for putting together such an elegant solution.
I wanted a CI/CD pipeline to handle the site’s deployment but couldn’t find any working tutorials, so I’m sharing my solution in case it helps others. Since this is a personal private repo hosted on Bitbucket, I’m using Bitbucket Pipelines.
After you enable pipelines for your project, you’ll need to configure a Pipelines Repository Variable in your project. Go to the settings tab in your repo, and then select Repository variables:
Add three variables:
- USER_NAME – the SSH user name you want Bitbucket to use to connect to your server.
- PRODUCTION_HOST – the domain Bitbucket should connect to.
- FOLDER – the folder path where the site should be deployed.
Generate an SSH key (or use your own) and add it to your server under the SSH Keys tab in Pipelines:
I generated a new key and then added it to ~/.ssh/authorized_keys for the account.
Add this YAML snippet to the bitbucket-pipelines.yml in your project root. It uses PHP 7.4, installs rsync, node + npm, and composer, then builds the production version of the site and deploys it to the specified folder.
The -avP switch for rsync gives me verbose progress feedback so I can see what’s happening. If you don’t need the detail, switch it to -a.
```yaml
image: php:7.4-fpm

pipelines:
  branches:
    master:
      - step:
          name: Jigsaw Build
          script:
            - apt-get update && apt-get install -y unzip
            - apt-get install rsync openssh-client nodejs npm -y
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - npm install
            - npm run production
            - rsync -avP build_production/ $USER_NAME@$PRODUCTION_HOST:$FOLDER --exclude=bitbucket-pipelines.yml --chown=www-data:www-data
```
I received a few errors when rsync ran. In case you run into them as well, here’s the list and fixes. The first was:
rsync: failed to set times on "$FOLDER": Operation not permitted (1)
I added --no-t to resolve that and then got a new error:
rsync: failed to set permissions on "$FOLDER": Operation not permitted (1)
which was fixed by adding the --no-perms switch. My final rsync command became:
rsync -avP --no-t --no-perms build_production/ $USER_NAME@$PRODUCTION_HOST:$FOLDER --exclude=bitbucket-pipelines.yml --chown=www-data:www-data
I’m working on a Laravel project and decided to use a Backblaze B2 bucket since it’s cheaper for storage than AWS S3. I couldn’t find a tutorial on getting it working from scratch, and the bunch of Laravel B2 libraries I tested didn’t end up working. The good news is that you don’t need a special B2 plugin; you can use the S3 package recommended by the Laravel docs.
If you haven’t added the flysystem-aws-s3 package, add it to your project using composer:
composer require league/flysystem-aws-s3-v3
Log in to your B2 account and create your bucket with the settings you need. Once it’s created, create a new application key with the permissions your app requires. You should get a confirmation once it’s generated:
Open your .env file and locate the settings for AWS. You’ll need to add one key that’s not there by default (AWS_ENDPOINT). Match the settings in your .env from the application key to the values below:
```
AWS_ACCESS_KEY_ID=keyID
AWS_SECRET_ACCESS_KEY=applicationKey
AWS_DEFAULT_REGION=us-west-000
AWS_BUCKET=bucket-name
AWS_ENDPOINT=S3 Endpoint
```
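Depending on your Laravel version, the default s3 disk in config/filesystems.php may not read AWS_ENDPOINT (newer versions include it by default). If yours doesn’t, the relevant disk entry would look roughly like this:

```php
// config/filesystems.php — the 's3' disk, with the endpoint added
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'endpoint' => env('AWS_ENDPOINT'), // points the S3 driver at B2's S3-compatible API
],
```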
Now you should be able to call the Laravel storage system like normal:
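Here’s a minimal sketch (the file name and contents are just placeholders):

```php
use Illuminate\Support\Facades\Storage;

// Write a file to the B2 bucket through the s3 disk
Storage::disk('s3')->put('example.txt', 'Hello from Backblaze B2');

// Read it back
$contents = Storage::disk('s3')->get('example.txt');
```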
I just spent a few hours setting up a GitLab pipeline to deploy a Storybook.js site. Of course, the end result ended up being much simpler than I made it out to be. Like everything else on my blog, I’m sharing in case anyone else can use the information to save time.
Just put this in your .gitlab-ci.yml and it’ll take care of caching the node modules and building the static version of Storybook to deploy.
```yaml
image: node:latest

cache:
  paths:
    - node_modules/

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build-storybook -- -o storybook-static
  artifacts:
    paths:
      - storybook-static
  only:
    - qa
    - develop
    - master

deploy:
  stage: deploy
  # add your deploy code here
```
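The deploy job is intentionally left open. If you’re pushing to S3 like the React pipeline above, it might look something like this (assuming the same S3_BUCKET_NAME variable):

```yaml
deploy:
  stage: deploy
  image: python:latest
  script:
    - pip3 install awscli
    - aws s3 sync ./storybook-static s3://$S3_BUCKET_NAME --acl public-read
  only:
    - master
```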
I had to move files from one system to another and kept hitting problems because files had been set as read-only, had the archive flag set, or both. That caused the system to skip those files, which wasn’t acceptable. Normally you could just use Windows to clear the flags in bulk, but that could potentially mess up file permissions. I needed a way to automatically clear all the flags while respecting permissions.
I did some searching and didn’t find a utility that would do the job, and most of the solutions I found required PowerShell, which wasn’t available on the system I was on. I ended up writing a quick console application in C# to do the trick. I’ve made it free and open source in case anyone wants to use it.
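The core idea is only a few lines. This isn’t the released utility itself, just a minimal sketch of the approach (and, like the original, with no error handling):

```csharp
using System;
using System.IO;

class ClearFlags
{
    static void Main()
    {
        Console.Write("Folder to process: ");
        string root = Console.ReadLine();

        // Walk every file and folder under the root
        foreach (string path in Directory.EnumerateFileSystemEntries(root, "*", SearchOption.AllDirectories))
        {
            // Clear only the ReadOnly and Archive flags; NTFS permissions
            // (ACLs) are separate from attributes and are left untouched.
            FileAttributes attrs = File.GetAttributes(path);
            File.SetAttributes(path, attrs & ~(FileAttributes.ReadOnly | FileAttributes.Archive));
        }
    }
}
```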
If you need just the app, you can find the release build here with instructions. The app also prompts for input to make things a bit easier to use. There’s no install, no tracking or metrics, or anything else related to privacy concerns in this app. It’s a simple throwaway utility to get the job done and move on.
If you want to see the source code, that is available here:
Please note that I did this in about 10 minutes for my own use, so error handling is pretty much non-existent. I mention this because I ran into one issue where Windows was somehow seeing a folder with files in it as a file; it couldn’t be deleted or renamed, and the utility couldn’t get past it until that was resolved. I didn’t spend much time debugging: I used my Mac to rename the folder, Windows recognized it after the change, and the utility was able to continue processing.
I inherited an old site someone else set up that is just basic static HTML, deployed by running git pull on the server. I wanted to automate the deployment, and since the site will be rebuilt anyway, instead of using rsync I realized I could just configure the Bitbucket Pipeline to SSH in and run the pull command. This is probably a fringe case, but here’s the bitbucket-pipelines.yml in case anyone finds it useful.
Add the repository variables for $USER, $SERVER, and $FOLDER with the appropriate values and then you should be able to run the deployment.
```yaml
pipelines:
  default:
    - step:
        script:
          - pipe: atlassian/ssh-run:0.2.8
            variables:
              SSH_USER: '$USER'
              MODE: 'command'
              SERVER: '$SERVER'
              COMMAND: 'cd $FOLDER && git pull'
```
I briefly joined my wife at her practice to help her grow the business and figure out how to make things more efficient. One of the things I learned is that she had created the office sign-in sheet in Microsoft Word. Every week she would open the file, manually enter the date for each day of the week, and then print out the documents. I took over the responsibility for a month, and the inefficiency of the process annoyed me enough that I decided to automate the entire thing. I couldn’t find a solution to the problem online, so I had to roll my own, and I’m sharing the code in case someone else can benefit from it.
The script calculates the first and last day of the month and then loops, appending the date in the “Day, Month day, Year” format (e.g. Thursday July 17, 2019) to a text field.
There are a few important steps involved to get the script working as is:
- Create a Word doc with the first page that you want to duplicate.
- Add a text field from the Developer tab. To copy and paste the code below as-is, you’ll need to name it txtDate; this is where the date will be added. If you want a different field name, change it in the two ActiveDocument.FormFields("txtDate") lines in the code. You can also change the date formats there to suit your needs.
- Add a second blank page to the document. I was running into issues where the paste was appearing partially on the first page. The blank page resolved this, and I added code to remove the original page as well as the blank one from the beginning.
How to Use
Open up Word, then open the VBA editor, and copy and paste this snippet into a module. When you run the function, it’ll create a copy of the page for every day of the month. There’s also a function that starts at a specific date in case you run it in the middle of the month.
```vb
Sub CreateSigninsForMonth()
    Dim N As Integer
    Dim sCurrentMonth, sCurrentYear As String
    Dim sNewDate As String

    N = 1
    Count = Day(GetLastDayOfMonth)

    For CopyNumber = 1 To Count
        With Selection
            .GoTo wdGoToPage, wdGoToAbsolute, 1
            .Bookmarks("\Page").Range.Copy
            .Paste
        End With

        sCurrentMonth = Format(Date, "mmmm")
        sCurrentYear = Format(Date, "yyyy")
        sNewDate = (CopyNumber & " " & sCurrentMonth & " " & sCurrentYear)
        ActiveDocument.FormFields("txtDate").Result = Format(sNewDate, "DDDD MMMM dd, YYYY")

        N = N + 1
    Next CopyNumber

    'Delete template + blank page
    For i = 1 To 2
        With ActiveDocument
            strt = .GoTo(wdGoToPage, wdGoToLast).Start
            Set r = .Range(strt - 1, .Range.End)
            r.Delete
        End With
    Next
End Sub

Sub CreateSigninsForMonthStartingDate()
    Dim Count As Integer
    Dim N As Integer
    Dim sCurrentMonth, sCurrentYear As String
    Dim sNewDate, sEndDay As String

    N = 1
    Count = 0
    iStartDay = InputBox("Which day do you want to start on?", "Starting Day", "1")
    Count = InputBox("Which day do you want to end on?", "Ending Day", Day(GetLastDayOfMonth))

    Do While Count > Day(GetLastDayOfMonth)
        sEndDay = InputBox("Which day do you want to end on?", "Ending Day", Day(GetLastDayOfMonth))
        If iStartDay = vbNullString Or sEndDay = vbNullString Then
            MsgBox "You clicked cancel.", vbOKOnly, "Try again later!"
            Exit Sub
        End If
        If IsNumeric(CInt(sEndDay)) Then
            Count = CInt(sEndDay)
        End If
    Loop

    For CopyNumber = iStartDay To Count
        With Selection
            .GoTo wdGoToPage, wdGoToAbsolute, 1
            .Bookmarks("\Page").Range.Copy
            .Paste
        End With

        sCurrentMonth = Format(Date, "mmmm")
        sCurrentYear = Format(Date, "yyyy")
        sNewDate = (CopyNumber & " " & sCurrentMonth & " " & sCurrentYear)
        ActiveDocument.FormFields("txtDate").Result = Format(sNewDate, "DDDD MMMM dd, YYYY")

        N = N + 1
    Next CopyNumber

    'Delete template + blank page
    For i = 1 To 2
        With ActiveDocument
            strt = .GoTo(wdGoToPage, wdGoToLast).Start
            Set r = .Range(strt - 1, .Range.End)
            r.Delete
        End With
    Next
End Sub

Function GetFirstDayOfMonth(Optional dtmDate As Date = 0) As Date
    ' Return the first day of the specified month.
    If dtmDate = 0 Then
        ' Use the current date if none was specified
        dtmDate = Date
    End If
    GetFirstDayOfMonth = DateSerial(Year(dtmDate), Month(dtmDate), 1)
End Function

Function GetLastDayOfMonth(Optional dtmDate As Date = 0) As Date
    ' Return the last day of the specified month.
    If dtmDate = 0 Then
        ' Use the current date if none was specified
        dtmDate = Date
    End If
    GetLastDayOfMonth = DateSerial(Year(dtmDate), Month(dtmDate) + 1, 0)
End Function
```
Years ago, when the internet was young, I purchased my first domain from InterNIC for about $70 a year. When alternatives finally popped up, I switched my domain to GoDaddy. As prices dropped, I ended up buying more domains through them for ideas and projects I developed. Eventually, I got tired of GoDaddy’s pricing shenanigans (raising prices and treating privacy as an add-on) and constant upselling, and after reviewing many technical forums, I ended up switching everything to Namecheap, where I even signed up for their shared hosting.
Things started well, but after a short while the hosting turned out to be no better than GoDaddy’s: all my sites became ridiculously slow as Namecheap oversold capacity. I finally made the jump to DigitalOcean, and the same site, without any changes or optimizations, gained about 60% in performance based on the metrics reports I was running. No joke, people were pinging me to ask what I had changed to make my site run so much faster because they wanted to do the same.
I was annoyed that Namecheap refused to refund a partial credit on the hosting even though I was no longer using it, and despite the fact that their bait and switch of overselling capacity caused the performance issues in the first place. In the end, I didn’t fight it: the hosting was cheap, I was past the credit card chargeback period, and I still had domains with them and didn’t like any of the other registrars (nor were their prices better) enough to make the switch.
On the domain side, their control panel was pretty confusing at first. GoDaddy’s was more straightforward for managing DNS and records, so I constantly had to figure out the Namecheap configuration equivalents. They updated their control panel and things got easier, but it still wasn’t intuitive. I think a lot of my confusion came from them trying to default everything to their parking pages or their hosting.
I’d run into issues, and their support was always good enough to help fix them. Then I hit a problem setting up a TXT record. The fix required them to manually enter the record on their side due to a bug in their control panel that they still haven’t fixed to date. The bug was serious enough that if I made any change to the domain’s settings (like adding another A/CNAME/MX record), it would undo their manual entry and I’d have to file another ticket to redo it.
All changes took hours to propagate; many of the services that validate DNS changes wouldn’t see them for hours or even a day. It hasn’t been a great experience, and I’ve wanted to find a new registrar to replace Namecheap.
In 2018, Cloudflare announced their registrar service. I patiently watched to see how reliable it was and finally ended up testing it last month with a domain for an existing project that’s in development. To say I was blown away would be an understatement. The experience hasn’t been perfect, but the things that really matter are well executed.
I found Namecheap to be a bit deceptive in the transfer process. Every other registrar I’ve used lets you approve transfers instantly via a link in the notification email. Namecheap sent an email saying they had received a request to transfer the domain, and this is the text about approving/canceling it:
I wanted to expedite the transfer because I wanted to clear out the control panel to see what was left, what was worth keeping, and which domains I should let go. I reached out to their chat and learned that they had implemented a dark pattern: I had never clicked the link because I assumed it was a direct link to cancel, and I didn’t want to accidentally cancel an approved transfer.
Their support person confirmed the text is deceptive: the link actually opens a page that lets you approve or cancel the transfer:
Not cool, Namecheap, but I digress. So let’s look at the pros and cons of Cloudflare, which explain why I’m switching to them.
Pros:
- The price point is cheaper than everyone else’s since they sell at wholesale.
- Transfers are painless.
- The control panel is intuitive compared to both Namecheap’s and GoDaddy’s, and it lets you manage records more quickly.
- DNS changes reflect instantly on all the third-party domain verification services.
- Cloudflare offers free SSL at the domain level.
- Support responded within a few hours to an issue I had.
Cons:
- You can’t register new domains directly at Cloudflare, which means purchasing a domain somewhere else and then waiting 60 days to transfer it.
- They don’t support all TLDs yet.
- You can’t create custom nameservers without paying $200 a month, which was free with GoDaddy and Namecheap.
Not bad for a cons list, so I migrated most of my domains over to Cloudflare, and it’s a win all the way around. They sell domains at cost and offer a ridiculously easy-to-use control panel where everything just works. Even better is how quickly changes take effect and how fast third-party services can validate them.
A few years ago, I purchased a DroboPro FS 8-bay NAS. After getting a demo from one of my vendors, I was sold on the ability to mix different-sized hard drives, hot swapping, and the self-repairing file system. Another feature I liked was that it monitors the health of the hard drives: if one starts developing issues or gets full, the Drobo warns you and automatically shifts your data to the other drives until you can replace it. I did my research, and at the time it looked like a fantastic deal.
After a few years with it, I can readily admit I made a huge mistake purchasing the product. I’m lucky in that my unit hasn’t failed yet like it has for others, but it’s also a horrible choice for backups, for a myriad of reasons.
- IT LOSES DATA: My Drobo mysteriously loses data after it reboots. I’ve noticed this a few times after coming home from vacation, having turned the unit off to save power: files (entire gigabytes of them) I hadn’t accessed in months were mysteriously gone. Because it’s a custom file system, I have no way of trying to recover anything. I’ve confirmed this actually happens by running a disk catalog before shutting down and then comparing afterward, which shows files are definitely missing. I’ve done this four times and verified it happened all four times. The missing files range from a few to hundreds, and they vary in size, so there’s no discernible pattern.
- Vendor Lock-in: You can’t access any of the files without a Drobo due to the custom file system. If the unit fails, you lose everything unless you buy another unit.
- Terrible Support: Support is lackluster. The warranty is one year, and then you’re on your own, which is not exactly confidence-inspiring.
- Weird Custom Desktop App: The Drobo dashboard is terrible and should have been a web interface like the other NAS options on the market. Instead, it’s an app that runs in the background to connect to the shares and manage permissions, which every OS can do natively. Even worse, the dashboard app supposedly connects to the Drobo through port 5000 and then broadcasts over a randomized port, which requires custom firewall rules if your firewall doesn’t support app-level permissions. There’s no reason for the app to exist, let alone broadcast anything.
- Doesn’t Deliver on Core Features: One of the features that sold me on the Drobo was that it would notify me when a drive was failing. In reality, the Drobo didn’t notify me about a failing drive until it was too late: the drive failed completely, and it never moved any of the data to other drives.
- Apps: Non-existent. They released some basic app support via a weird workaround (creating a directory and adding the app files to it), but then it suddenly disappeared from the Drobo website. A third-party site came up to mirror the lost apps, but at this point it’s not worth the effort to me. DroboApps was an extremely limited offering and an afterthought, and they ultimately removed app support because they didn’t want to deal with supporting users.
- Remote File Access: Doesn’t exist natively.
Compared to Synology DS1918+
I purchased a Synology DS1918+ 8-bay NAS, and the differences are pretty stark. I’ve been moving important files from the Drobo over to it as a safeguard.
- No data loss: Thus far, no data has been lost on the Synology.
- Apps: The app support on Synology is incredible. I use a bunch (Plex, Photos, Backups, and many others), but the one I love most is Synology Drive, which works as a replacement for Dropbox.
- Web Interface: The Synology web interface is extremely intuitive and easy to work with, and there are options for power users to make the NAS even more useful. I love that there’s no custom app running in the background on my desktop; connecting to shares is as simple as using the network shares native to the OS.
- Permissions/Security: Synology permissions can be extremely granular and include support for roles/groups as well as individual users.
- Sharing: Synology offers an easy way to share files over the internet with permissions, like Dropbox does, except you aren’t limited by file-size restrictions. Drobo offers nothing comparable.
- Remote Connect: Synology offers a browser-based remote connection solution called QuickConnect. Once you configure your Synology for remote access (it even configured my firewall permissions for me), you can access it via the QuickConnect URL. No jumping through hoops for setup or access; it just works. Good luck setting this up on the Drobo, as I haven’t figured it out.