Sunday, November 8, 2020

One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.

This is not a message I wanted to see in my FreeNAS GUI when running brand-new disks, but oh well.

First scrub on July 19, 2020

I discovered the error on July 29, 2020 while moving some 300 GB of data to one of my mirrored pools. After the initial panic of "what is this?!" I logged in to my server over ssh and checked what had happened with zpool status.
zpool status
pool: Tank
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 24.8M in 0 days 00:00:01 with 0 errors on Sun Jul 19 00:00:32 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        Tank                                                ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/blip_disk1.eli  ONLINE       0     0     3
            gptid/blip_disk2.eli  ONLINE       0     0     0

errors: No known data errors


Well, well, well. It seems that there were a few MB of data mismatch between the members of the mirror. As these are new disks, I am not particularly panicked for the moment, especially after reading the link above and reflecting a bit on recent events.
 

The likely culprit


During a maintenance/upgrade about two weeks earlier, one of the drives "fell out" of the pool due to a loosely attached SATA power cable, and my pool became "DEGRADED" (another word that one does not see with great pleasure in the GUI...). Since this pool was at the time used for the system log, as well as for my jails, the remaining disk kept carrying out read/write operations, thereby getting out of sync with the other, at the time OFFLINE, drive. In the end I managed to get the disk back into the pool; however, I imagine that the changes that happened on the first disk were not mirrored automatically upon re-attaching the second disk. The July 19 midnight event appears to have been a scrub, which must have caught the data mismatch and fixed it.

In this case it is probably not a huge issue. I cleared the error message by dismissing it in the GUI, and from the terminal as well via,
sudo zpool clear Tank gptid/blip_disk1.eli
and will continue to monitor the situation.
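If you do not want to wait for the next scheduled scrub, one can also be started by hand to confirm that the rest of the mirror is healthy; a quick sketch, using the pool name from above:

sudo zpool scrub Tank     # read and verify every block in the pool
zpool status Tank         # shows scrub progress and any new CKSUM errors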

Another scrub on Aug 9, 2020

The scrub this time also caught some things, and zpool status gave the following.

pool: Tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 12K in 0 days 06:29:41 with 0 errors on Sun Aug  9 06:53:43 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        Tank                                                ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/blip_disk1.eli  ONLINE       0     0     3
            gptid/blip_disk2.eli  ONLINE       0     0     0

errors: No known data errors
 
Now, during a nightly scrub, another 12K was discovered and repaired. This was again on the same disk as previously, and I am still wondering whether this is some leftover of the previously described issue, perhaps something that was not caught last time. According to the discussion "Yikes, scrub repaired 172K", it could be anything or nothing, since I am running server-grade hardware with ECC memory. Either way, as a precaution I am doing the following (a command sketch follows the list):
  • create a snapshot,
  • refresh my backup,
  • schedule a long SMART test and
  • (if time allows) run a memtest.
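For reference, the snapshot and SMART steps translate to something like the commands below; the snapshot name is arbitrary, and /dev/ada1 is the suspect disk on my system. The backup step depends entirely on your own setup.

sudo zfs snapshot -r Tank@before-diagnosis   # recursive snapshot of all datasets in the pool
sudo smartctl -t long /dev/ada1              # schedule an extended SMART self-test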

Note: I know that some people just love recommending a memtest. However, looking at the issue, it is statistically extremely unlikely to be a memory problem, as proper memory, which server-grade memory is, should pass quality checks after manufacturing and really rarely goes bad.

If the SMART tests pass, I will call it a day and keep observing the system. If the SMART test throws back some errors, or if the error happens another time on the same drive, I will contact the retailer, as the drive is well within warranty.

Drive S.M.A.R.T. status 

Checking the drive SMART status with

sudo smartctl -a /dev/ada1
revealed no apparent errors with the disk. SMART tests previously all completed without errors.

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1404         -
# 2  Extended offline    Completed without error       00%      1329         -
# 3  Short offline       Completed without error       00%      1164         -
# 4  Short offline       Completed without error       00%       996         -
# 5  Short offline       Completed without error       00%       832         -
# 6  Short offline       Completed without error       00%       664         -
# 7  Short offline       Completed without error       00%       433         -
# 8  Short offline       Completed without error       00%       265         -
# 9  Extended offline    Completed without error       00%       190         -
#10  Extended offline    Completed without error       00%        18         -
#11  Short offline       Completed without error       00%         0         -

Memtest

Came back clean. I am not particularly surprised here.

Status on 08 November 2020

A few months have passed since I started writing this post. In the meantime I monitored the situation and did not discover any further issues. The pool is running fine and no further scrubs have reported any errors. I therefore conclude that the issue was most likely caused by the mishap described above and has nothing to do with the drive itself.


Friday, July 24, 2020

Syncthing jail on FreeNAS: "RuntimeError: mount_nullfs:"


Reason

This error arises when one first stops a jail, adds a new mount point, and then tries to start the jail back up. The jail will not start and presents an error message instead. I presume that in my FreeNAS-11.2-U8 the underlying issue is that adding a new mount point does not automatically create the destination directory, hence the error message.
Runtime error message when trying to start the jail after following incorrect steps to set up a new Syncthing share.
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 166, in call_method
    result = await self.middleware.call_method(self, message)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1093, in call_method
    return await self._call(message['method'], serviceobj, methodobj, params, app=app, io_thread=False)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1037, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1058, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 990, in run_in_proc
    return await async_run_in_executor(loop, executor, method, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/middlewared/utils/asyncio_.py", line 41, in async_run_in_executor
    raise result
RuntimeError: mount_nullfs: /mnt/POOL/iocage/jails/Syncthing/root/media/Sync: Resource deadlock avoided
jail: /sbin/mount -t nullfs -o rw /mnt/POOL/Syncthing /mnt/POOL/iocage/jails/Syncthing/root/media/Sync: failed

Solution

It follows then that the correct way to add a new share to Syncthing in an iocage jail is to first create the directory inside the jail itself where the dataset will be mounted. This is the crucial step. On my system, by default, this was somewhere in
/mnt/POOL/iocage/jails/Syncthing/root/media/Sync/SHARE-NAME
however, this may vary for you. So log in via ssh to your FreeNAS server, cd into the Syncthing jail and mkdir the new directory. After this, the jail can be stopped and a new mount point added, where the source is the storage pool and the destination is the previously created directory; the whole sequence is sketched below. Starting the jail afterwards will work just fine, and the rest can be done from the Syncthing GUI.
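For reference, the same steps can also be done with the iocage CLI; a rough sketch, assuming the jail is called Syncthing, the pool is POOL and the dataset to be shared lives at /mnt/POOL/SHARE-NAME:

iocage exec Syncthing mkdir -p /media/Sync/SHARE-NAME   # create the destination inside the jail first
iocage stop Syncthing
iocage fstab -a Syncthing /mnt/POOL/SHARE-NAME /media/Sync/SHARE-NAME nullfs rw 0 0
iocage start Syncthing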

If the above issue is already present, remove the wrongly added mount point in the FreeNAS GUI and start again.

Thursday, June 14, 2018

Screen tearing in Ubuntu with xfce4 using intel HD4000 graphics

Recently I switched back from the Unity desktop environment to xfce4 and noticed screen tearing when watching movies, and even slightly when scrolling through websites. Needless to say, it is quite annoying and should not be happening.

My laptop has a decent Intel Core i5 CPU with Intel HD4000 graphics, which should be more than capable of playing back movies perfectly, let alone displaying websites. Hence the newly discovered screen tearing must be a side effect of switching to xfce4. Initially I thought it was caused by some new driver that got installed, or rather replaced, during the switch, but there was no sign of that.

lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
dpkg -l | grep intel
ii  xserver-xorg-video-intel                                    2:2.99.917+git20160325-1ubuntu1.2                 amd64        X.Org X server -- Intel i8xx, i9xx display driver

Solution

After scouring the internet for possible causes, such as a misconfigured xorg config or wrong drivers, and trying to fix those, I saw it mentioned somewhere that enabling vertical sync might help. In xfce4 this setting is found under Settings/Window Manager Tweaks/Compositor/"Synchronize drawing to the vertical blank"; see the screenshot below.

Window Manager Tweaks, Compositor tab.
Make sure to tick the "Synchronize drawing to the vertical blank" checkbox and the tearing should be gone.
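For the terminal-inclined, the same checkbox can be toggled with xfconf-query; a small sketch, assuming the default xfwm4 compositor is in use and that the checkbox is backed by xfwm4's sync_to_vblank property:

xfconf-query -c xfwm4 -p /general/sync_to_vblank -s true   # enable synchronized drawing
xfconf-query -c xfwm4 -p /general/sync_to_vblank           # read back; should print true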

Tuesday, June 12, 2018

Set date and time in Raspbian

Problem

My new installation of Raspbian stretch had its time off. The easiest way I found to set this right was using timedatectl.

timedatectl
      Local time: Tue 2018-06-12 18:27:41 UTC
  Universal time: Tue 2018-06-12 18:27:41 UTC
        RTC time: n/a
       Time zone: Etc/UTC (UTC, +0000)

Network time on: yes
NTP synchronized: yes

RTC in local TZ: no

The date was correct but the actual time was off. Checking the output of timedatectl, the time zone (Etc/UTC) was wrong, so that was the obvious problem.

Solution

Checking the help of timedatectl quickly led to a resolution.

timedatectl -h
timedatectl [OPTIONS...] COMMAND ...

Query or change system time and date settings.

  -h --help                Show this help message
     --version             Show package version
     --no-pager            Do not pipe output into a pager
     --no-ask-password     Do not prompt for password
  -H --host=[USER@]HOST    Operate on remote host
  -M --machine=CONTAINER   Operate on local container
     --adjust-system-clock Adjust system clock when changing local RTC mode

Commands:
  status                   Show current time settings
  set-time TIME            Set system time
  set-timezone ZONE        Set system time zone
  list-timezones           Show known time zones
  set-local-rtc BOOL       Control whether RTC is in local time
  set-ntp BOOL             Enable or disable network time synchronization

  1. List the available timezones that can be set with,
    timedatectl list-timezones
    and scroll through the list to find the correct one.
  2. Set your current timezone with,
    timedatectl set-timezone Zone/City
No reboot is required, changes take effect right away.
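For example, for a machine that should be on German time (Europe/Berlin is only an illustration, substitute your own zone):

timedatectl list-timezones | grep -i berlin   # narrows down the long list
sudo timedatectl set-timezone Europe/Berlin
timedatectl                                   # verify that Local time is now correct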

Thursday, May 10, 2018

Install Gridcoinresearch wallet on the Raspberry Pi 3B

I previously wrote about how to get Gridcoinresearch running under Ubuntu. Since then the project has advanced, and in most cases you can simply install the wallet from a PPA.
I came to realize that this was not the case with my Raspberry Pi 3 B, running Raspbian stretch. When trying to add a PPA as recommended by Launchpad Gridcoin-stable:
Stable builds for ordinary users for i386, amd64 and armhf. This PPA will lag leisure releases by up to one day to ensure stability. Mandatory upgrades will be released immediately.
The following happens.

sudo add-apt-repository ppa:gridcoin/gridcoin-stable
Traceback (most recent call last):
  File "/usr/bin/add-apt-repository", line 95, in <module>
    sp = SoftwareProperties(options=options)
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 109, in __init__
    self.reload_sourceslist()
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 599, in reload_sourceslist
    self.distro.get_sources(self.sourceslist)
  File "/usr/lib/python3/dist-packages/aptsources/distro.py", line 89, in get_sources
    (self.id, self.codename))
aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Raspbian/stretch

Hence, let's compile from source.

Update 12/06/2018

The latest release of Gridcoin switched to using autogen instead of the old make -f makefile process. The install and update process is less tedious this way, but it takes quite long on a Raspberry Pi 3B. The general method for building and installing is,
./autogen.sh 
./configure
make
make install
However, to succeed on Raspbian, some packages need to be installed first. Proceed as follows.
 

1.

BerkeleyDB 4.8 needs to be installed, and exactly that version! So what is available from apt (libdb5.x) is no good. I found a good resource for installing it from source at askubuntu.com. Below is the solution that worked on my Raspberry Pi.
wget 'http://download.oracle.com/berkeley-db/db-4.8.30.NC.tar.gz'
tar -xzvf db-4.8.30.NC.tar.gz
cd db-4.8.30.NC/build_unix/

../dist/configure --enable-cxx

make
sudo make install

And then tell the system where to find db4.8
export BDB_INCLUDE_PATH="/usr/local/BerkeleyDB.4.8/include"
export BDB_LIB_PATH="/usr/local/BerkeleyDB.4.8/lib"
sudo ln -s /usr/local/BerkeleyDB.4.8/lib/libdb-4.8.so /usr/lib/libdb-4.8.so
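As a sanity check, you can verify that the linker now sees the library:

sudo ldconfig                    # refresh the shared library cache
ldconfig -p | grep libdb-4.8     # should list /usr/lib/libdb-4.8.so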

2.

After this the building can begin. The first few steps are simple,
sudo apt-get install autoconf
git clone https://github.com/gridcoin-community/Gridcoin-Research
cd Gridcoin-Research
./autogen.sh
When trying to run ./configure I got into trouble with BerkeleyDB 4.8 still not being detected, or perhaps being superseded by a higher version. The solution here was to pass the include and library paths via CPPFLAGS and LDFLAGS before running ./configure, as,
env CPPFLAGS='-I/usr/local/BerkeleyDB.4.8/include' LDFLAGS='-L/usr/local/BerkeleyDB.4.8/lib' ./configure
Finally,
make

Note: As usual, you can run make -j4 to build on 4 threads; however, I got a lot of system hangs with my Raspberry Pi at this point. Maybe the RAM ran out, I am not certain. See if it works for you, but I would recommend setting up some temporary swap as described below in "Setting up additional Swap".


--- UPDATE END ---

TL;DR

For those who understand the commands and want to get things running quickly.

sudo apt-get install ntp git build-essential libssl-dev libdb-dev libdb++-dev libqrencode-dev libcurl4-openssl-dev libzip4 libzip-dev libboost1.62-all-dev libminiupnpc-dev
sudo dd if=/dev/zero of=/mnt/2GB.swap bs=1024 count=2097152
sudo mkswap /mnt/2GB.swap
sudo swapon /mnt/2GB.swap
git clone https://github.com/gridcoin/Gridcoin-Research
cd ~/Gridcoin-Research/src
chmod 755 leveldb/build_detect_platform
make -f makefile.unix -j 2 USE_UPNP=- -e PIE=1
strip gridcoinresearchd
sudo cp gridcoinresearchd /usr/bin/gridcoinresearchd
sudo swapoff -a

Installation

Installing the client is described on the Github page [1] of the project, and following it step by step will in most cases result in a perfect installation; for the Raspberry Pi, however, it can get tricky.

Install dependencies

Nothing special here, just the dependencies. First do a 
sudo apt-get update && sudo apt-get upgrade
and then
sudo apt-get install ntp git build-essential libssl-dev libdb-dev libdb++-dev libqrencode-dev libcurl4-openssl-dev libzip4 libzip-dev libboost1.62-all-dev libminiupnpc-dev

Setting up additional Swap

I ran into memory issues during compiling, especially with multiple threads. You can try to skip this step and see if it works; otherwise let's create a 2 GiB swap file. The commands take about 2-4 minutes to finish, so be patient.

sudo dd if=/dev/zero of=/mnt/2GB.swap bs=1024 count=2097152
sudo mkswap /mnt/2GB.swap 
sudo swapon /mnt/2GB.swap

This will create a 2GiB /mnt/2GB.swap file that will be used during compiling.
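To double-check that the swap is really active before starting the build:

swapon -s    # the new /mnt/2GB.swap file should be listed
free -h      # total swap should have grown by 2.0G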

Clone and compile gridcoinresearchd from Github

Everything is ready to pull gridcoinresearchd from Github and proceed with building it.
git clone https://github.com/gridcoin/Gridcoin-Research
cd ~/Gridcoin-Research/src
chmod 755 leveldb/build_detect_platform 
make -f makefile.unix -j 2 USE_UPNP=- -e PIE=1
strip gridcoinresearchd
sudo cp gridcoinresearchd /usr/bin/gridcoinresearchd
To start gridcoinresearchd type
gridcoinresearchd 

Note: The compiling (make) takes a while. Be patient.

Tip: "gridcoinresearchd" is a system command now and can be called quickly by starting to type it "grid..." and pressing [TAB]. Also, type gridcoinresearchd help to get a list of available commands with the headless client. The -j 2 option tells make to use 2 CPU cores, this helps speeding up the compiling. Some general practice says that it is fine to use number of CPU x 1.5 to get a good parallel performance, however when using 4-6 threads I ran out of memory and got errors.

Turn off swap

Swap is no longer needed after compiling, so it can be turned off. This helps prolong the SD card's lifespan by preventing frequent writes to it.
sudo swapoff -a

Troubleshooting

If any dependencies are missing (or, of course, something else is wrong) you will be greeted by an army of error messages. For example,
upgrader.h:4:42: fatal error: curl/curl.h: No such file or directory
 #include <curl/curl.h> // for downloading
means that there is an issue with curl. It took me a while, but it was not actually curl that was missing, but libcurl. After installing it with sudo apt-get install libcurl4-openssl-dev, the compiling got past this point.

Or something like this
compilation terminated.
makefile.unix:137: recipe for target 'obj/rpcrawtransaction.o' failed
make: *** [obj/rpcrawtransaction.o] Error 1
Here, if I remember correctly, the issue was a missing Boost library; sudo apt-get install libboost1.62-all-dev fixed the problem in this case.

These are just two examples, but the rule applies generally: check the last few lines of the error message and see what is missing. You can check whether a package is installed with dpkg -l | grep <PACKAGENAME>.

Of course, if you get stuck somewhere, feel free to leave a comment below.

Sources
[1] - https://github.com/gridcoin/Gridcoin-master/blob/master/CompilingGridcoinOnLinux.txt 

Sunday, October 1, 2017

Windows Share Permissions Done Right in FreeNAS

The scenario is that you have a FreeNAS machine (for reference I am running FreeNAS-11.0-U2) and you want a multi-user system where different users have different permissions to access shares over a local network. Here I will show a quick and basic setup of a new share and of its permissions. Then I will explain two common issues that are encountered and how to resolve them:
  • Everybody can see and read the shares over the Windows network
  • I have set up the users, added them to the proper group, but they still cannot access a Dataset

To begin with, the basic steps for creating a new network share are as follows:
  1. Create and manage users and groups
  2. Create and share the Datasets

Create and Manage Users and Groups

It is probably easier to start with this. Say we want to have 3 users: Alice, Bob and Charlie. They should all be granted access to some common shared directories and have restricted access to some other directories.
  1. Create a Group called "Shared". This group will own the directories (Datasets) that should be accessible to all of the users.
    Creating a new group called Shared.
  2. Create the users Alice, Bob and Charlie and add them to the group Shared.
    Creating a new user and adding immediately to the Shared group
    Adding the new user Alice, at the same time assigning her to the Shared group.

Create and Share the Datasets

There are plenty of guides on this, and it is not so complicated once you get the hang of it. For reference take a look at doc.freenas.org, forums.freenas.org or tekblog. Here, just for the sake of introduction, is the basic idea.
  1. Create a new Dataset called "Common" as a Windows share.
    Creating a new Dataset called Common.
  2. Change the permissions of the newly created Dataset and set the Owner (user) as root and the Owner (group) the Shared group.
    Changing the permissions of the new Dataset.
  3. Share the newly created Dataset. This makes it available over the network.
    Creating a new Windows (SMB) share
    Creating a new SMB (Windows) share for the newly created Dataset.
At this point all 3 users have access to the Common share over the network, by default at \\freenas.local\Common. This is the basic setup and it will work on freshly created datasets. If you have previously changed any permissions on parent Datasets, then read the section below explaining the issues.
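To quickly test what a given user sees, you can poke the server from a Linux client with smbclient, assuming it is installed (freenas.local is the default hostname used above):

smbclient -L //freenas.local -U Alice       # list the shares visible to Alice
smbclient //freenas.local/Common -U Alice   # open the Common share interactively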

General Errors and Solutions

A brief section explaining some (trivial) problems that I encountered and found hard to get an explanation for.

Everybody can see and read the share over the network

By default, when making Windows shares in FreeNAS, the group "Everyone" is added to a share, hence all users who can log in can actually view the share. The solution is to attach the volume on a Windows machine as the owner of the dataset, right-click the folder, go to permissions and remove the group "Everyone" from the access list. This prevents LAN users from seeing the shares altogether.
Checking user and group permissions for the main Dataset
By default, the group "Everyone" is added to FreeNAS Windows shares.

Permissions settings for the main Dataset
To deny access of local network users without explicit permissions to view the shared Datasets, remove the "Everyone" group from the permissions tab.
If you have sub-folders in the dataset, you will get a prompt asking whether you want to change the permissions recursively; you can say yes.

I have set up the users, added them to the proper group, but they still cannot access a Dataset

This can happen if a parent Dataset is shared and some of its sub-datasets are also shared separately. The issue arises when the sub-dataset has to be shared with a user, but the parent dataset has to be restricted. It took me a while to figure out, as it is often not mentioned, but the parent dataset in FreeNAS has to have the same Owner (group) as the sub-dataset you want to share. Let's look at the following simple share setup as an example.
Storage Dataset with Music and Series sub-datasets.
Example share setup, where the Storage Dataset has 2 sub-datasets.


If I wanted to share just the Music sub-dataset with Alice, I would need to do the following (a shell sketch follows the list),
  • Create a new group, e.g. called "Shared"
  • Add Alice to the group Shared
  • Make "Shared" the Owner (group) of the Music dataset
  • Make "Shared" the Owner (group) of the Storage parent Dataset (this is usually forgotten!)
  • To restrict Alice's access to the Series dataset, make sure that it is owned by another group in which Alice is not a member.
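From the FreeNAS shell the same ownership changes would look roughly like this; a sketch only, assuming the pool is mounted under /mnt/Tank (the GUI permission editor remains the supported route):

chown root:Shared /mnt/Tank/Storage            # the parent dataset, the step usually forgotten
chown -R root:Shared /mnt/Tank/Storage/Music   # the sub-dataset to be shared with Alice
ls -l /mnt/Tank/Storage                        # verify owners and groups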

Run Storjshare in a FreeNAS Jail

Not really a Debian/Ubuntu thing per se, but since I recently built a FreeNAS system, I thought it would be useful to rent out an unused 2 TB disk. So here it goes: the Storjshare daemon inside a FreeNAS-11.0-U2 jail.

I am assuming that you know what storjshare is, have basic experience with its terminal (non-GUI) version, and have hands-on experience with FreeNAS.

Update (2018 Jan 9):

I ran into a range of errors during an update. Here are my observations and the solutions I found.
  1. Installing via nvm did not work for me, instead I manually installed the dependencies,
    pkg install npm
    npm install -g npm3

    Currently this would install the following node and npm versions,
    node v9.3.0
    npm3 3.10.10
    npm 5.3.0

    Since npm installs the latest version of node automatically, manual installation of node is not necessary.
  2. Updating storjshare after this works using,
    npm3 install storjshare-daemon --global --no-optional
    (Yes, that is npm3 and not npm. I seem to have run into an infinite number of troubles with that.)
  3. Permission errors with npm can be fixed by changing npm's default directory
  4. Currently running,
    storjshare --version
    daemon: 5.3.0, core: 8.5.0, protocol: 1.2.0
If any packages are reported missing when installing storjshare-daemon via npm3, remember to install them by,
npm3 install -g <package>
instead of,
npm install -g <package>

TL;DR (aka Advanced users) 

  1. Create jail and assign storage space
  2. In jail terminal
    pkg install npm git
  3. Then install npm3 with,
    npm install -g npm3
  4. Install storjshare via npm,
    npm3 install storjshare-daemon --global --no-optional
  5. Start the daemon and connect a farmer node
    storjshare daemon
    storjshare start --config yourconfig.json
Not clear enough? Read below.

Create a Jail and add some space

  1. Go to Jails/Add Jail. No fancy settings required; probably name it something useful like Storjshare
    Adding a new jail in FreeNAS
    Add a new jail "Storj" where the service will run.
  2. Assign storage space to the jail. Go to Jail/Storage/Add Storage. Select the source, i.e. the drive or directory that will store the future files, and the destination. The destination could be e.g. /mnt/Storjshare and you can ask for the new directory to be created.
Creating a new jail in the UI
Adding storage space to an existing jail.

Allocating storage space from a Dataset to the newly created jail
Assigning the source (drive space) of Drive1/Storjshare to the jail's /mnt/Storjshare mount point.

 The jail is ready and set, proceed to the next step.

Installing storjshare

Now, I did not follow the standard instructions, as installing node the described way did not seem to work. Instead I manually installed the required node version via pkg. We need the LTS version 6 of node, and we can check for it. You can either log in to the jail via ssh or simply launch a terminal from the UI on the Jails tab.
UI snippet showing how to start a shell from the web browser
Conveniently launching a terminal from the UI.
Once the terminal is open, let's install the prerequisites first, followed by storjshare.
  1. Search for available node versions via,
    pkg search node
    pkg search node output in the shell
    pkg search node returns a list of available packages, notice the node6-6.11.3-1.
  2. The node version we need is node6-6.11.3-1 as shown above. This can be installed with,
    pkg install node6-6.11.3-1
    Installing node6 LTS using pkg install
    Installing node6 with pkg.
  3. At the end you will be prompted to install npm3, so do,
    pkg install npm
    Installing npm via pkg install
    Installing npm3 after node6.
    Since npm3 can no longer be found directly through pkg, to install it do,
    npm install -g npm3
  4. These should be completed, so install the other required packages as well,
    pkg install git
  5. Start installing storjshare as per the github guide,
    npm3 install storjshare-daemon --global --no-optional
    A few warnings will be present, but all functionality will work.
Note: Above, --no-optional was added to the install command as a suggestion from github, because the dtrace package fails to build on FreeBSD at the moment. Since the package is not necessary for storjshare, this module can be omitted to avoid annoying, and non-relevant, error messages. When building without --no-optional, a similar error will be thrown, although storjshare would still run:
Error: Cannot find module './build/Release/DTraceProviderBindings'

Running storjshare

This is somewhat beyond the scope of this guide; however, here is a quick walkthrough of setting up a simple storjshare farming node.
  1. Create your config file with the help of storjshare-create --help
    Usage: storjshare-create [options]

    generates a new share configuration

    Options:

    -h, --help                 output usage information
    --storj <addr>             specify the STORJ address (required)
    --key <privkey>            specify the private key
    --storage <path>           specify the storage path
    --size <maxsize>           specify share size (ex: 10GB, 1TB)
    --rpcport <port>           specify the rpc port number
    --rpcaddress <addr>        specify the rpc address
    --maxtunnels <tunnels>     specify the max tunnels
    --tunnelportmin <port>     specify min gateway port
    --tunnelportmax <port>     specify max gateway port
    --manualforwarding         do not use nat traversal strategies
    --logdir <path>            specify the log directory
    --noedit                   do not open generated config in editor
    -o, --outfile <writepath>  write config to path
    For example,
    storjshare-create --storj myPayoutAddress --storage /mnt/StorjShare --size 2TB --logdir /root/ -o settings.json
  2. After the config file was created, start the daemon with,
    storjshare daemon
  3. Finally, start the farming node using the previous settings,
    storjshare start --config settings.json
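    To confirm that the share came up, ask the daemon for an overview,
    storjshare status
    which lists each configured share with its status and uptime, as in the screenshot below.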
     
storjshare status output from the FreeNAS jail
Storjshare inside a FreeNAS jail, running without problems.

Note: Specifying a log directory can be necessary. During my tryouts I encountered some trouble with the log directory not being accessible to the jail's user.

Happy farming!