Saturday, December 26, 2020

E-Mail notifications in FreeNAS using Gmail

Setting up E-Mail alerts is a smart way of making sure nothing unexpected happens on a FreeNAS server without you knowing about it, and it saves checking the UI every now and then. A "should be straightforward" way ought to be setting up a Gmail account as the outgoing E-Mail. Nevertheless, it is not as simple as clicking "yeah, sure, email me".

Objective

Get a working E-Mail notification from FreeNAS by using a Gmail E-Mail address for sending the messages.

Overview

FreeNAS needs an SMTP server and corresponding E-Mail account credentials to send E-Mails; it cannot send them directly without an E-Mail server. Additionally, using a Gmail address with the regular account password does not work either. The steps to get such a setup running are briefly as follows,

  1. Get a Gmail E-Mail address,
  2. Allow "App password" access to the E-Mail account, and
  3. Set up FreeNAS E-Mail service.

Settings in the Google account

This is the first step in configuring sending E-Mails through a Gmail account. Basically, a new App password is required for FreeNAS. Connecting and sending E-Mails via the username and password simply did not work in my case, not even when enabling "Allow less secure apps". So to properly set up access, the following needs to be done,

  1. Log in to the Google account.
  2. Go to Security.

    Google account Security tab

  3. Enable 2 Factor Authentication (this is required for step 4!).

    Enable two-step verification in the Security tab.

  4. Once 2 Factor Authentication is enabled, an option becomes available to create App passwords.

    App passwords option becomes available after enabling 2FA

  5. Create a new App password and copy it to the clipboard.

    Generate a new App password for FreeNAS and copy it to the clipboard.

Once this is done, the settings from Gmail side are complete.

Settings in FreeNAS

  1. Go to System/Email and adjust the settings according to the details below.

    FreeNAS System/Email settings showing the fields to make it work with Gmail
    From E-Mail: ChooseItToYourLiking
    Outgoing Mail Server: smtp.gmail.com
    Mail Server Port: 465
    Security: SSL (Implicit TLS)
    SMTP Authentication: Ticked
    Username: example@gmail.com
    Password: YourAppPassword
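
If the test E-Mail fails later on, it is worth verifying that the server can actually reach Gmail's SMTP endpoint before blaming the credentials. A quick sanity check from the FreeNAS shell (this only tests connectivity and TLS, not the App password):

    # Expect a TLS handshake followed by a "220 smtp.gmail.com ESMTP" greeting.
    openssl s_client -connect smtp.gmail.com:465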

Once this is done, a test E-Mail can be sent by clicking "SEND EMAIL". If all above steps were followed, a test notification will arrive in the E-Mail box associated with the root user (!). Following this, an alert service can be set up as well. Without going into too much detail,

  1. Go to System/Alert Services.
  2. Click "Add" and choose Type/Email.
  3. Fill out your email address and click "SEND TEST ALERT".

This should be working now as well, and the types of alerts can be configured based on user needs.

Monday, December 21, 2020

FreeNAS home server backups Done Right

There are plenty of guides explaining how to back up FreeNAS - now TrueNAS - servers. Personally, I found them either vague about what they actually back up and what the final outcome will be, or way too complicated for people with home NAS servers who want something that "just works". So here is how I periodically back up my FreeNAS server to USB external drives.

Expected Result

 

Inspect the following example layout of a Source_Pool and an empty Destination_Pool; I will adhere to this naming in the rest of the post. Also, make sure you understand every command before following the steps. Working with root access is always risky, and data can be lost or previous backups mistakenly overwritten.


Source_Pool                   ==> This is the main pool we want to back up.
    ---Dataset1               ==> This is a dataset within the pool.
    ---Dataset2
        ---Sub-dataset
    ---Dataset3

Destination_Pool              ==> This is the destination pool we want to back up TO.
    ---Backup_Source_Pool     ==> This is the top-level dataset that will be created of the source pool.
        ---Dataset1
        ---Dataset2
            ---Sub-dataset
        ---Dataset3

Note how the original datasets of Source_Pool all appear under Destination_Pool/Backup_Source_Pool. This means that another pool (say Source_Pool#2, not shown in the example above) may also be backed up to the Destination_Pool, e.g. to a dataset called Destination_Pool/Backup_Source_Pool#2. This would then also contain the full dataset and child dataset layout of Source_Pool#2. Provided that the Destination_Pool has enough capacity, several pools may be backed up to, for example, a single USB drive.

Considerations

 
  • Data deliberately deleted over time from Source_Pool should also be deleted on the Destination_Pool inside the backups. I do not want to hoard data that I deleted for a good reason.
  • The snapshot used for backing up data must not be deleted until a newer backup has been made using another, newer snapshot. In other words, always keep the latest snapshot of the system. Deleting all snapshots will require a full backup of the entire pool.
  • Permissions and ACLs are retained.

If in any doubt, check man zfs.

Overview

The general idea is to create an initial snapshot of the Source_Pool and use zfs send | zfs receive to send it over to the Destination_Pool into an existing dataset named Backup_Source_Pool. Once the initial backup is done, future backups can be incremental, which requires two snapshots to exist; the backup will then transfer only the changes between the two snapshots. In simple terms, at the very least two snapshots must exist on the Source_Pool to use incremental backups.
  1. Identify source and destination datasets where backups should be made from and to.
  2. Create a snapshot of the source dataset.
  3. Send and receive dataset stream using zfs send | zfs receive to destination pool.
  4. For future backups, send incremental backups using the same method.

Initial Backup

  1. Create an initial snapshot of Source_Pool. I prefer doing this through the UI using a recursive snapshot. (Recursive means that all Datasets within Source_Pool will also be snapshotted, otherwise only data directly in the main Dataset directory will be snapshotted! Short: if you want the entire Pool, use recursive.)
    Note: I highly recommend using and sticking to a naming style for snapshots. I am using,
    Source_Pool@BACKUP-20200814
    and will adhere to this.
  2. Create a dataset called Backup_Source_Pool under the Destination_Pool. I prefer doing this from the UI as well.
  3. Use ssh to log in to the FreeNAS server and verify the snapshot is there,
    zfs list -t snapshot

    if there are too many snapshots, search for the correct one, e.g.

    zfs list -t snapshot | grep @BACKUP-20200814

  4. Assuming both Source_Pool and Destination_Pool are present and mounted in the system, proceed by making the initial backup. Note: this requires root access, so
    sudo -i

    (sudo zfs send ... does not work!)

    zfs send -Rv Source_Pool@BACKUP-20200814 | zfs receive -Fdu Destination_Pool/Backup_Source_Pool
A bit of explanation from man zfs,
R --  Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.
v --  Print verbose information about the stream package generated.
F -- Force a rollback of the filesystem to the most recent snapshot before performing the receive operation.
d -- Use the name of the sent snapshot to determine the name of the new snapshot as described in the paragraph above. See man zfs for more accurate info.
u --  Newly created file system is not mounted. (Translation: when inspecting from the UI, the copied dataset will be visible; however, going over ssh manually to /mnt/Backup_Source_Pool will show up empty. No need to panic, the data is there, it is merely not mounted. I usually cannot mount a single directory using zfs mount Backup_Pool - due to, I suspect, some shares being mounted or similar - but zfs mount -a works just fine and the data "shows up".)
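
Before running the real transfer in step 4 above, the stream size can be estimated with a dry run, and afterwards the received datasets can be listed to confirm the backup landed. A minimal sketch, assuming your version of zfs send supports the -n (dry run) flag:

    # Dry run: estimate the size of the full replication stream without sending anything.
    zfs send -Rvn Source_Pool@BACKUP-20200814

    # After the real send | receive, confirm the datasets arrived on the destination.
    zfs list -r Destination_Pool/Backup_Source_Pool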
 

Incremental Backup

 
This will only work if,
  1. you already have an initial backup, and
  2. you still have the snapshot of that backup (including the snapshots of all Sub-datasets if it was a recursive snapshot). 
Otherwise you will get an error. Unfortunately, this means that if the first Snapshot is no longer available, an incremental backup cannot be made and a new, initial backup has to be done, transferring all the data again. When both snapshots are in place, only the change between the latest and the previous snapshot is transferred, so the backup is greatly sped up.

Note that the following code will transfer only the differences between two Snapshots, and data that was removed from Source_Pool will also be deleted on Destination_Pool during the backup. To make an incremental backup, the -i option is used, followed by the old and the new snapshot names.
  1. Create a new recursive snapshot, this will be,
    Source_Pool@BACKUP-20200815
  2. Use ssh to log in to the FreeNAS server and check the Snapshots available
    zfs list -t snapshot | grep Source_Pool@BACKUP
    Hopefully there will be,
    Source_Pool@BACKUP-20200814

    Source_Pool@BACKUP-20200815

  3. Assuming both Source_Pool and Destination_Pool are present and mounted in the system, proceed by making the incremental backup. Note: this requires root access, so

    sudo -i
    (sudo zfs send ... does not work!)
    zfs send -Rv -i Source_Pool@BACKUP-20200814 Source_Pool@BACKUP-20200815 | zfs receive -Fdu Destination_Pool/Backup_Source_Pool

Note that here the -i argument is added, which stands for "incremental backup", see man zfs. This requires two distinct snapshots to exist on the Source_Pool, and the older of the two snapshots to exist on the Destination_Pool.
i -- Generate an incremental stream from snapshot1 to snapshot2. The incremental source snapshot1 can be specified as the last component of the snapshot name (for example, the part after the "@"), and it is assumed to be from the same file system as snapshot2.

Similarly to the initial backup, the Destination_Pool is unmounted when done.
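
Since the incremental run is always the same few commands, it lends itself to a small script. A minimal sketch under the naming scheme used above (the pool names and the date-stamped snapshot suffix follow this post's example; adjust them to your system, run as root, and it assumes at most one backup per day):

#!/bin/sh
# Incremental backup sketch following the steps above.
SRC=Source_Pool
DST=Destination_Pool/Backup_Source_Pool
NEW="${SRC}@BACKUP-$(date +%Y%m%d)"

# The most recent existing BACKUP snapshot becomes the incremental source.
OLD=$(zfs list -H -t snapshot -o name -s creation | grep "^${SRC}@BACKUP-" | tail -1)
if [ -z "${OLD}" ]; then
    echo "No previous BACKUP snapshot found; do an initial backup first." >&2
    exit 1
fi

zfs snapshot -r "${NEW}"
zfs send -Rv -i "${OLD}" "${NEW}" | zfs receive -Fdu "${DST}"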
 

Snapshot keeping strategy

Since for the incremental backups at least two Snapshots are needed, I always keep a minimum of two recursive snapshots of my system. This is a balance between storage space and ability to create backups or roll back to previous snapshots.

Technically, between backups one can keep just a single Snapshot on the Source_Pool. Then, when backup day arrives, make a new Snapshot, do an incremental backup between the two Snapshots - the older of which exists on the Destination_Pool - and then delete the older snapshot of the two from Source_Pool.

In other words, for incremental backups to work, a Snapshot must not be deleted until another backup was made since that snapshot.
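
Once a newer backup has been made, the older snapshot can be pruned. A hedged example using the names from this post; the -n flag makes zfs destroy do a dry run, which I would recommend before deleting anything recursively:

    # Dry run first: show what would be destroyed, recursively.
    zfs destroy -rnv Source_Pool@BACKUP-20200814

    # If the list looks right, actually delete the old snapshot.
    zfs destroy -rv Source_Pool@BACKUP-20200814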

Reference

 
Not really necessary perhaps, but for simplicity, here are some excerpts from man zfs relating to zfs send and zfs receive.
zfs send [-DvRp] [-[iI] snapshot] snapshot
Creates a stream representation of the second snapshot, which is written to standard output. The output can be redirected to a file or to a different system (for example, using ssh(1)). By default, a full stream is generated.

-D

Perform dedup processing on the stream. Deduplicated streams cannot be received on systems that do not support the stream deduplication feature.
-i snapshot
Generate an incremental stream from the first snapshot to the second snapshot. The incremental source (the first snapshot) can be specified as the last component of the snapshot name (for example, the part after the @), and it is assumed to be from the same file system as the second snapshot.

If the destination is a clone, the source may be the origin snapshot, which must be fully specified (for example, pool/fs@origin, not just @origin).

-I snapshot
Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source snapshot may be specified as with the -i option.
-R
Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.

If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.

-p
Send properties.
-v
Print verbose information about the stream package generated.
The format of the stream is committed. You will be able to receive your streams on future versions of ZFS.
 
zfs receive [-vnFu] filesystem|volume|snapshot
zfs receive [-vnFu] [-d | -e] filesystem
Creates a snapshot whose contents are as specified in the stream provided on standard input. If a full stream is received, then a new file system is created as well. Streams are created using the zfs send subcommand, which by default creates a full stream. zfs recv can be used as an alias for zfs receive.

If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.

When a snapshot replication package stream that is generated by using the zfs send -R command is received, any snapshots that do not exist on the sending location are destroyed by using the zfs destroy -d command.

The name of the snapshot (and file system, if a full stream is received) that this subcommand creates depends on the argument type and the -d or -e option.

If the argument is a snapshot name, the specified snapshot is created. If the argument is a file system or volume name, a snapshot with the same name as the sent snapshot is created within the specified filesystem or volume. If the -d or -e option is specified, the snapshot name is determined by appending the sent snapshot's name to the specified filesystem. If the -d option is specified, all but the pool name of the sent snapshot path is appended (for example, b/c@1 appended from sent snapshot a/b/c@1), and if the -e option is specified, only the tail of the sent snapshot path is appended (for example, c@1 appended from sent snapshot a/b/c@1). In the case of -d, any file systems needed to replicate the path of the sent snapshot are created within the specified file system.

-d

Use all but the first element of the sent snapshot path (all but the pool name) to determine the name of the new snapshot as described in the paragraph above.
-e
Use the last element of the sent snapshot path to determine the name of the new snapshot as described in the paragraph above.
-u
File system that is associated with the received stream is not mounted.
-v
Print verbose information about the stream and the time required to perform the receive operation.
-n
Do not actually receive the stream. This can be useful in conjunction with the -v option to verify the name the receive operation would use.
-F
Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R -[iI]), destroy snapshots and file systems that do not exist on the sending side.

Sunday, November 8, 2020

One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.

This is not a message I wanted to see in my FreeNAS GUI when running brand new disks, but oh well.

First scrub on July 19 2020

I discovered the error on July 29 2020 when moving some 300 GB of data to one of my mirrored pools. After the initial panic of "what is this?!", I logged in to my server over ssh and checked what had happened with zpool status.
zpool status
pool: Tank
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 24.8M in 0 days 00:00:01 with 0 errors on Sun Jul 19 00:00:32 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        SafeHaven                                           ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/blip_disk1.eli  ONLINE       0     0     3
            gptid/blip_disk2.eli  ONLINE       0     0     0

errors: No known data errors


Well, well, well. It seems that there was a few MB of data mismatch between the members of the mirror. As these are new disks, I am not particularly panicked for the moment, especially after reading the link above and reflecting a bit on recent events.
 

The likely culprit


During a maintenance/upgrade about two weeks ago, one of the drives "fell out" of the pool due to a loosely attached SATA power cable, and therefore my pool became "DEGRADED" (another word that one does not see with great pleasure in the GUI...). Since this pool was at the time used for the system log, as well as for my jails, the one remaining disk was still carrying out read/write operations, thereby getting out of sync with the other - at the time OFFLINE - drive. In the end I managed to get the disk back in the pool; however, I imagine that the changes that happened on the first disk were not mirrored automatically upon re-attaching the second disk. That July 19 midnight event was a scrub, which must have caught the data mismatch and fixed it.

In this case it is probably not a huge issue. I cleared the error message by dismissing it in the GUI and from the terminal as well via,
sudo zpool clear Tank gptid/blip_disk1.eli
and will continue to monitor the situation.

Another scrub on Aug 9 2020

The scrub this time also caught some things, and zpool status gave the following.

pool: Tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 12K in 0 days 06:29:41 with 0 errors on Sun Aug  9 06:53:43 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        SafeHaven                                           ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/blip_disk1.eli  ONLINE       0     0     3
            gptid/blip_disk2.eli  ONLINE       0     0     0

errors: No known data errors
 
Now, during a nightly scrub, another 12K was discovered and repaired. This was again on the same disk as previously, and I am still wondering if this is not some leftover of the previously described issue. Perhaps something that was not caught last time? According to the "Yikes, scrub repaired 172K" thread, it could be anything or nothing, since I am running server-grade hardware with ECC memory. Either way, out of precaution I am doing the following (the shell commands are sketched after this list):
  • create a snapshot,
  • refresh my backup,
  • schedule a long SMART test and
  • (if time allows) run a memtest.
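
For reference, the non-UI parts of that list can be done from the shell. A minimal sketch with the pool and disk names used above (Tank and /dev/ada1; the snapshot name is just an example, adjust everything to your system):

    # Recursive safety snapshot of the affected pool (example name).
    sudo zfs snapshot -r Tank@PRECAUTION-20200809

    # Kick off a long (extended) SMART self-test on the suspect disk.
    sudo smartctl -t long /dev/ada1

    # Check the result once the test has had time to finish.
    sudo smartctl -l selftest /dev/ada1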

Note: I know that some people just love recommending running a memtest. However, looking at the issue, it is statistically extremely unlikely that it is a memory issue, as proper memory - which server-grade memory is - should pass quality checks after manufacturing, and it really rarely goes bad.

If the SMART tests pass, I will call it a day and keep observing the system. If the SMART test throws back some errors, or if the error happens another time on the same drive, I will contact the retailer, as the drive is well within warranty.

Drive S.M.A.R.T. status 

Checking the drive SMART status with

 sudo smartctl -a /dev/ada1 
revealed no apparent errors with the disk. All previous SMART tests completed without errors.

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      1404         -
# 2  Extended offline    Completed without error       00%      1329         -
# 3  Short offline       Completed without error       00%      1164         -
# 4  Short offline       Completed without error       00%       996         -
# 5  Short offline       Completed without error       00%       832         -
# 6  Short offline       Completed without error       00%       664         -
# 7  Short offline       Completed without error       00%       433         -
# 8  Short offline       Completed without error       00%       265         -
# 9  Extended offline    Completed without error       00%       190         -
#10  Extended offline    Completed without error       00%        18         -
#11  Short offline       Completed without error       00%         0         -

Memtest

Came back clean. I am not particularly surprised here.

Status on 08 November 2020

A few months have passed since I started writing this post. In the meantime I was monitoring the situation and did not discover any further issues. The pool is running fine, and no further scrubs reported any errors. I therefore conclude that the issue was most likely caused by the malfunction described above and has nothing to do with the drive itself.


Friday, July 24, 2020

Syncthing jail on FreeNAS: "RuntimeError: mount_nullfs:"


Reason

This error arises when one first stops a jail, adds a new mount point, and then tries to start the jail back up. An error message is presented and the jail will not start. I presume that in my FreeNAS-11.2-U8 the underlying issue is that adding a new mount point does not automatically create the destination directory, hence the error message.
Runtime error message when trying to start up the jail after following incorrect steps to set up a new Syncthing share:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 166, in call_method
    result = await self.middleware.call_method(self, message)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1093, in call_method
    return await self._call(message['method'], serviceobj, methodobj, params, app=app, io_thread=False)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1037, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1058, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 990, in run_in_proc
    return await async_run_in_executor(loop, executor, method, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/middlewared/utils/asyncio_.py", line 41, in async_run_in_executor
    raise result
RuntimeError: mount_nullfs: /mnt/POOL/iocage/jails/Syncthing/root/media/Sync: Resource deadlock avoided
jail: /sbin/mount -t nullfs -o rw /mnt/POOL/Syncthing /mnt/POOL/iocage/jails/Syncthing/root/media/Sync: failed

Solution

It follows then that the correct way to add a new share to Syncthing in an iocage jail is to first create the directory inside the jail itself where the dataset will be mounted. This is the crucial step. On my system, by default, this was somewhere in
/mnt/POOL/iocage/jails/Syncthing/root/media/Sync/SHARE-NAME
however, this may vary for you. So log in via ssh to your FreeNAS server, cd to the Syncthing jail and mkdir a new directory. After this, the jail can be stopped and a new mount point added, where the source is the storage pool and the destination is the previously created directory. Starting the jail afterwards will work just fine, and the rest can be done from the Syncthing GUI.

If the above issue is already present, remove the "falsely added" mount point in the FreeNAS GUI and start again.
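
The same steps can also be done entirely from the shell with iocage instead of the GUI. A sketch, assuming the layout from the error message above; the SHARE-NAME placeholder and the source dataset path are illustrative, and you should double-check the iocage fstab argument order on your version:

    # Create the destination directory inside the jail first.
    iocage exec Syncthing mkdir -p /media/Sync/SHARE-NAME

    # Stop the jail, register the nullfs mount, and start it again.
    iocage stop Syncthing
    iocage fstab -a Syncthing /mnt/POOL/SHARE-NAME /media/Sync/SHARE-NAME nullfs rw 0 0
    iocage start Syncthing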

Thursday, June 14, 2018

Screen tearing in Ubuntu with xfce4 using intel HD4000 graphics

Recently I switched back from the Unity desktop environment to xfce4, and I noticed screen tearing when watching movies and even slightly when scrolling through websites. Needless to say, it is quite annoying and should not be happening.

My laptop has a decent Intel Core i5 CPU with Intel HD4000 graphics, which should be more than capable of playing back movies perfectly, let alone displaying websites. Hence, the newly discovered screen tearing must be a side-effect of switching to xfce4. Initially I thought it was caused by some new driver that got installed or rather replaced during the switch, but there were no signs of that.

lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
dpkg -l | grep intel
ii  xserver-xorg-video-intel                                    2:2.99.917+git20160325-1ubuntu1.2                 amd64        X.Org X server -- Intel i8xx, i9xx display driver

Solution

After scouring the internet for possible causes, such as a misconfigured xorg config or wrong drivers, and trying to fix these, I saw vertical sync mentioned somewhere, and that enabling it might help. In xfce4 this feature is found under Settings/Window Manager Tweaks/Compositor/Synchronize drawing to the vertical blank, see the screenshot below.

Synchronize drawing to the vertical blank
Window Manager Tweaks
Make sure to tick the "Synchronize drawing to the vertical blank" tickbox and the tearing should be gone.
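
The same switch can also be flipped from a terminal, which is handy over ssh. A sketch assuming the usual xfwm4 property name for this setting; verify it first with xfconf-query -c xfwm4 -l on your install:

# Enable "Synchronize drawing to the vertical blank" in xfwm4's compositor.
xfconf-query -c xfwm4 -p /general/sync_to_vblank -s true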

Tuesday, June 12, 2018

Set date and time in raspbian

Problem

My new installation of Raspbian stretch had its time off. The easiest way I found of setting this right was using timedatectl.

timedatectl
      Local time: Tue 2018-06-12 18:27:41 UTC
  Universal time: Tue 2018-06-12 18:27:41 UTC
        RTC time: n/a
       Time zone: Etc/UTC (UTC, +0000)

Network time on: yes
NTP synchronized: yes

RTC in local TZ: no

The date was correct but the actual time was off. Checking the output of timedatectl, the time zone was wrong, so that was the obvious problem.

Solution

Checking the help of timedatectl quickly led to a resolution.

timedatectl -h
timedatectl [OPTIONS...] COMMAND ...

Query or change system time and date settings.

  -h --help                Show this help message
     --version             Show package version
     --no-pager            Do not pipe output into a pager
     --no-ask-password     Do not prompt for password
  -H --host=[USER@]HOST    Operate on remote host
  -M --machine=CONTAINER   Operate on local container
     --adjust-system-clock Adjust system clock when changing local RTC mode

Commands:
  status                   Show current time settings
  set-time TIME            Set system time
  set-timezone ZONE        Set system time zone
  list-timezones           Show known time zones
  set-local-rtc BOOL       Control whether RTC is in local time
  set-ntp BOOL             Enable or disable network time synchronization

  1. List the available timezones that can be set with,
    timedatectl list-timezones
    and scroll through the list to find the correct one.
  2. Set your current timezone with,
    timedatectl set-timezone Zone/City
No reboot is required, changes take effect right away.
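
For example, on a machine that should be on Berlin time (the zone name here is just an illustration, pick yours from the list):

    # Narrow down the list instead of scrolling through all of it.
    timedatectl list-timezones | grep -i berlin

    # Set the zone and verify.
    sudo timedatectl set-timezone Europe/Berlin
    timedatectl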

Thursday, May 10, 2018

Install Gridcoinresearch wallet on the Raspberry Pi 3B

I previously wrote about how to get Gridcoinresearch running under Ubuntu. Since then, the project has advanced and in most cases you can simply install the wallet from a PPA.
I came to realize that this was not the case with my Raspberry Pi 3 B, running Raspbian stretch. When trying to add the PPA as recommended by Launchpad Gridcoin-stable:
Stable builds for ordinary users for i386, amd64 and armhf. This PPA will lag leisure releases by up to one day to ensure stability. Mandatory upgrades will be released immediately.
The following happens.

sudo add-apt-repository ppa:gridcoin/gridcoin-stable
Traceback (most recent call last):
  File "/usr/bin/add-apt-repository", line 95, in <module>
    sp = SoftwareProperties(options=options)
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 109, in __init__
    self.reload_sourceslist()
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 599, in reload_sourceslist
    self.distro.get_sources(self.sourceslist)
  File "/usr/lib/python3/dist-packages/aptsources/distro.py", line 89, in get_sources
    (self.id, self.codename))
aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Raspbian/stretch

Hence, let's compile from source.

Update 12/06/2018

The latest release of Gridcoin switched to using autogen instead of the old make -f makefile process. The install and update process is less tedious this way, but it takes quite long on a Raspberry Pi 3B. The general method for building and installing it is,
./autogen.sh 
./configure
make
make install
However, to succeed on Raspbian, some packages need to be installed first. Proceed as follows.
 

1.

BerkeleyDB 4.8 needs to be installed, exactly this version! So basically what is available from apt (libdb5.x) is not good. I found a good resource for installing this from source at askubuntu.com. Below is the solution that worked on my Raspberry Pi.
wget 'http://download.oracle.com/berkeley-db/db-4.8.30.NC.tar.gz'
tar -xzvf db-4.8.30.NC.tar.gz
cd db-4.8.30.NC/build_unix/

../dist/configure --enable-cxx

make
make install

And then tell the system where to find db4.8
export BDB_INCLUDE_PATH="/usr/local/BerkeleyDB.4.8/include"
export BDB_LIB_PATH="/usr/local/BerkeleyDB.4.8/lib"
ln -s /usr/local/BerkeleyDB.4.8/lib/libdb-4.8.so /usr/lib/libdb-4.8.so

2.

After this the building can begin. The first few steps are simple,
sudo apt-get install autoconf
git clone https://github.com/gridcoin-community/Gridcoin-Research
cd Gridcoin-Research
./autogen.sh
When trying to run ./configure, I got into trouble with BerkeleyDB 4.8 still not being detected, or perhaps superseded by a higher version. The solution here was to set the CPPFLAGS and LDFLAGS environment variables before running ./configure as,
env CPPFLAGS='-I/usr/local/BerkeleyDB.4.8/include' LDFLAGS='-L/usr/local/BerkeleyDB.4.8/lib' ./configure
Finally,
make

Note: As usual, you can run make -j4 to run it on 4 threads; however, I got a lot of system hangs with my Raspberry Pi at this point. Maybe the RAM ran out, I am not certain. See if it works for you, but I would recommend you set up some temporary swap as described below in "Setting up additional Swap".


--- UPDATE END ---

TL;DR

For those who understand the commands and want to get things running quickly.

sudo apt-get install ntp git build-essential libssl-dev libdb-dev libdb++-dev libqrencode-dev libcurl4-openssl-dev libzip4 libzip-dev libboost1.62-all-dev libminiupnpc-dev
sudo dd if=/dev/zero of=/mnt/2GB.swap bs=1024 count=2097152
sudo mkswap /mnt/2GB.swap
sudo swapon /mnt/2GB.swap
git clone https://github.com/gridcoin/Gridcoin-Research
cd ~/Gridcoin-Research/src
chmod 755 leveldb/build_detect_platform
make -f makefile.unix -j 2 USE_UPNP=- -e PIE=1
strip gridcoinresearchd
sudo cp gridcoinresearchd /usr/bin/gridcoinresearchd
sudo swapoff -a

Installation

Although installing the client is described on the Github page [1] of the project, and following it step-by-step will in most cases result in a perfect installation, for the Raspberry Pi it can get tricky.

Install dependencies

Nothing special here, just the dependencies. First do a 
sudo apt-get update && sudo apt-get upgrade
and then
sudo apt-get install ntp git build-essential libssl-dev libdb-dev libdb++-dev libqrencode-dev libcurl4-openssl-dev libzip4 libzip-dev libboost1.62-all-dev libminiupnpc-dev

Setting up additional Swap

I ran into issues with memory during compiling, especially with multiple threads. You can try to skip this step and see if it works; otherwise, let's create a 2 GiB swap file. The commands take about 2-4 minutes to finish, so be patient.

sudo dd if=/dev/zero of=/mnt/2GB.swap bs=1024 count=2097152
sudo mkswap /mnt/2GB.swap 
sudo swapon /mnt/2GB.swap

This will create a 2GiB /mnt/2GB.swap file that will be used during compiling.
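
To confirm the swap is actually active before starting the long compile, a quick check (either command works):

swapon -s        # lists active swap files/partitions
free -h          # the "Swap" row should now show about 2.0G total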

Clone and compile gridcoinresearchd from Github

Everything is ready to pull gridcoinresearchd from Github and proceed with building it.
git clone https://github.com/gridcoin/Gridcoin-Research
cd ~/Gridcoin-Research/src
chmod 755 leveldb/build_detect_platform 
make -f makefile.unix -j 2 USE_UPNP=- -e PIE=1
strip gridcoinresearchd
sudo cp gridcoinresearchd /usr/bin/gridcoinresearchd
To start gridcoinresearchd type
gridcoinresearchd 

Note: The compiling (make) takes a while. Be patient.

Tip: "gridcoinresearchd" is a system command now and can be called quickly by starting to type it "grid..." and pressing [TAB]. Also, type gridcoinresearchd help to get a list of available commands with the headless client. The -j 2 option tells make to use 2 CPU cores, this helps speeding up the compiling. Some general practice says that it is fine to use number of CPU x 1.5 to get a good parallel performance, however when using 4-6 threads I ran out of memory and got errors.

Turn off swap

Swap is no longer needed after compiling, so it can be turned off. This helps prolong the SD card's lifespan by preventing overly frequent writes.
sudo swapoff -a

Troubleshooting

If any dependencies are missing (or, of course, something else is wrong) you will be greeted by an army of error messages. For example,
upgrader.h:4:42: fatal error: curl/curl.h: No such file or directory
 #include <curl/curl.h> // for downloading
means that there is an issue with curl. It took me a while, but it was not actually curl, but libcurl that was missing. After installing it with sudo apt-get install libcurl4-openssl-dev, the compiling got past this point.

Or something like this
compilation terminated.
makefile.unix:137: recipe for target 'obj/rpcrawtransaction.o' failed
make: *** [obj/rpcrawtransaction.o] Error 1
Here, if I remember correctly, the issue was a missing boost library. sudo apt-get install libboost1.62-all-dev fixed the problem in this case.

Of course, these are just two examples, but the rule applies. Check the last few lines of the error message, and see what is missing. You can check whether a package is installed with dpkg -l | grep <PACKAGENAME>.

Of course, if you get stuck somewhere, feel free to leave a comment below.

Sources
[1] - https://github.com/gridcoin/Gridcoin-master/blob/master/CompilingGridcoinOnLinux.txt