Pricing an option on a cumulative sum.

Consider an option whose payout is based on an underlying which is itself a cumulative sum. For example, a construction company might want to mitigate the risk of rainfall causing delays in a project. The construction company could buy a call option that pays out when the cumulative rainfall over a 3-day period exceeds 180mm. Further, we restrict this so that any one day can be counted in only one payoff. If the cumulative rainfall $r$ over a 3-day period exceeded 180mm, the entity selling the call option would have the obligation to pay the construction company based on the difference $r - 180$.

As the entity selling one of these options, we need to work out the expected loss and worst possible scenario for payouts.

Let $r_0, r_1, \ldots$ be the sequence of daily rainfall. Form the 3-sums:

\[
s_0 = r_0+r_1+r_2, s_1 = r_1+r_2+r_3, s_2 = r_2+r_3+r_4, \ldots
\]

Let $C > 0$ be the cutoff (e.g. 180mm). Then form the sequence $w_0, w_1, \ldots$ where

\[
w_i = f_C(s_i)
\]

where $f_C(x)$ is $0$ if $x \leq C$ and $x$ otherwise.
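
As a quick sketch (hypothetical variable names, not code from the original post), the 3-sums and the thresholded values can be computed directly in Python:

def three_sums(r):
    # Rolling 3-day sums: s_i = r_i + r_{i+1} + r_{i+2}.
    return [r[i] + r[i+1] + r[i+2] for i in range(len(r) - 2)]

def threshold(s, C):
    # f_C(x) is 0 if x <= C and x otherwise.
    return [0 if x <= C else x for x in s]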

As the seller of the option, the worst outcome for us (i.e. largest payout) corresponds to the subsequence

\[
w_{k_1}, w_{k_2}, \ldots, w_{k_n}
\]

that maximises the sum $\sum_{l=1}^n w_{k_l}$ for some $n$, where the terms $w_{k_l}$ are mutually disjoint (i.e. we don’t take the 3-sum of any days that overlap). We consider this sum over one year, but the same approach applies to monthly, quarterly, etc.

Formulation as a graph problem

Label vertices with the 3-sums $w_i$ and create an edge $(w_i, w_j)$ if the sums $w_i$ and $w_j$ refer to overlapping time periods. Then our problem is equivalent to finding a Maximum Weighted Independent Set in the graph. This is a classic NP-hard problem.
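
For concreteness, here is a minimal sketch of the overlap test, representing each 3-sum by the 3-tuple of day indices it covers. This is one possible implementation of the is_adjacent helper used by the solver below (the names are assumptions, not from the repository):

# Assuming 'weights' is the list of w_i values defined above.
indices = [(i, i+1, i+2) for i in range(len(weights))]

def is_adjacent(x, y):
    # Two 3-day windows overlap if they share at least one day.
    return len(set(x) & set(y)) > 0

# Edges of the conflict graph: pairs of overlapping windows.
edges = [(i, j) for i in range(len(indices))
                for j in range(i + 1, len(indices))
                if is_adjacent(indices[i], indices[j])]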

Formulation as an integer linear programming problem

Create binary decision variables $x_0, x_1, \ldots$ and maximise

\[ \sum_i w_i x_i \]

subject to

\[
x_i + x_j \leq 1
\]

when $w_i$ and $w_j$ refer to non-disjoint cumulative 3-sums.

Python solver

Using PuLP we can very quickly write a solver in Python.

First set up a solver:

prob = pulp.LpProblem('maxcsums', pulp.LpMaximize)

Create binary variables corresponding to each vertex:

x_vars = [ pulp.LpVariable('x' + str(i), cat='Binary')
             for (i, _) in enumerate(weights) ]

Use reduce (imported from functools in Python 3) to form the sum \( \sum_i w_i x_i \):

prob += reduce(lambda a,b: a+b,
                   [ weights[i]*x_vars[i] for i in range(len(weights))])

Finally, given a list of indices (3-tuples), add the constraints to avoid solutions with adjacent vertices:

for (i, x) in enumerate(indices):
  for (j, y) in enumerate(indices):
    if x < y:
      if is_adjacent(x, y):
        prob += x_vars[i] + x_vars[j] <= 1
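
Putting the pieces together, here is a minimal self-contained sketch (illustrative made-up data; three_sums, threshold and is_adjacent as sketched above, not the exact code from the repository):

from functools import reduce
import pulp

rainfall = [10.0, 150.0, 80.0, 5.0, 0.0, 120.0, 95.0, 2.0]  # made-up sample data
C = 180.0

weights = threshold(three_sums(rainfall), C)
indices = [(i, i+1, i+2) for i in range(len(weights))]

prob = pulp.LpProblem('maxcsums', pulp.LpMaximize)
x_vars = [pulp.LpVariable('x' + str(i), cat='Binary')
          for (i, _) in enumerate(weights)]

# Objective: maximise the total of the selected 3-sums.
prob += reduce(lambda a, b: a + b,
               [weights[i] * x_vars[i] for i in range(len(weights))])

# Constraints: never select two overlapping windows.
for (i, x) in enumerate(indices):
    for (j, y) in enumerate(indices):
        if x < y and is_adjacent(x, y):
            prob += x_vars[i] + x_vars[j] <= 1

prob.solve()
print('maximum total of disjoint 3-sums:', pulp.value(prob.objective))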

Changi daily rainfall

For a concrete example, let’s use the daily rainfall data for Changi in Singapore.

Here is a plot of the 3-sums:

[Plot: 3-day cumulative rainfall sums for Changi]

For cutoffs from 100 to 300, here are the number of extreme events per year:

     100 120 140 160 180 200 220 240 260 280 300
1980   3   3   2   1   1   1   1   1   1   0   0
1981   1   1   1   1   0   0   0   0   0   0   0
1982   2   1   1   1   1   1   1   1   0   0   0
1983   2   2   2   2   2   1   0   0   0   0   0
1984   6   4   2   1   1   1   0   0   0   0   0
1985   1   1   0   0   0   0   0   0   0   0   0
1986   4   4   4   3   1   0   0   0   0   0   0
1987   4   3   2   2   1   1   1   1   1   0   0
1988   9   4   2   1   1   0   0   0   0   0   0
1989   3   1   1   1   1   1   1   1   0   0   0
1990   2   1   1   1   0   0   0   0   0   0   0
1991   4   3   1   0   0   0   0   0   0   0   0
1992   6   3   2   2   1   1   1   0   0   0   0
1993   1   0   0   0   0   0   0   0   0   0   0
1994   4   3   2   2   2   1   0   0   0   0   0
1995   3   3   3   2   2   2   1   0   0   0   0
1996   1   0   0   0   0   0   0   0   0   0   0
1997   0   0   0   0   0   0   0   0   0   0   0
1998   3   3   1   1   0   0   0   0   0   0   0
1999   2   0   0   0   0   0   0   0   0   0   0
2000   2   2   0   0   0   0   0   0   0   0   0
2001   5   3   2   2   2   2   2   2   2   1   0
2002   4   1   1   1   1   0   0   0   0   0   0
2003   5   1   1   1   1   1   0   0   0   0   0
2004   6   4   2   2   1   1   1   0   0   0   0
2005   2   1   1   1   0   0   0   0   0   0   0
2006   5   5   5   5   3   2   2   1   1   1   1
2007   7   5   4   3   1   1   1   1   1   0   0
2008   4   3   1   0   0   0   0   0   0   0   0
2009   1   1   0   0   0   0   0   0   0   0   0
2010   4   1   0   0   0   0   0   0   0   0   0
2011   4   2   1   1   1   1   1   1   1   1   0
2012   3   0   0   0   0   0   0   0   0   0   0
2013   5   3   3   1   0   0   0   0   0   0   0
2014   0   0   0   0   0   0   0   0   0   0   0
2015   0   0   0   0   0   0   0   0   0   0   0
2016   1   0   0   0   0   0   0   0   0   0   0

We will go with 180mm as the cutoff. Ultimately, choosing the strike is a business decision.

The burn is $\max(w_i - 180, 0)$, indicating the payout that the seller of the option would be liable for. Here is the daily burn for the Changi daily rainfall (zero rows are not shown):

            c3sum   burn
DATE                    
1980-01-20  275.5   95.5
1982-12-22  245.4   65.4
1983-08-22  190.6   10.6
1983-12-26  218.8   38.8
1984-02-02  219.3   39.3
1986-12-11  186.7    6.7
1987-01-11  260.5   80.5
1988-09-21  188.9    8.9
1989-12-01  252.5   72.5
1992-11-11  223.2   43.2
1994-11-11  209.0   29.0
1994-12-19  180.7    0.7
1995-11-02  203.4   23.4
1995-02-06  236.9   56.9
2001-01-16  268.7   88.7
2001-12-27  281.4  101.4
2002-01-25  182.1    2.1
2003-01-31  210.5   30.5
2004-01-25  237.9   57.9
2006-12-19  356.4  176.4
2006-12-26  222.7   42.7
2006-01-09  190.7   10.7
2007-01-12  260.4   80.4
2011-01-31  298.2  118.2

The yearly burn is $\sum_i \max(w_i - 180, 0)$ where the $w_i$ are in a particular calendar year. Plotting the yearly burn:

[Plot: total yearly burn]

The histogram of burn values shows that a value around $80$ is the most frequent total yearly burn:

[Plot: histogram of yearly burn]

Cumulative histogram of the yearly burn:

[Plot: cumulative histogram of yearly burn]

From the yearly burn we can compute the expected loss over various periods:


Expected loss (all years):     71.13
Expected loss (last 10 years): 86.22
Expected loss (last 20 years): 71.13

We see that the expected loss over the last 10 years differs from the expected loss over the last 20 years by about 15.
The 99th percentile is p99 = 223.051.
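
These summary figures can be computed from the yearly burn series along the following lines (a sketch with hypothetical names; yearly_burn is assumed to be a pandas Series indexed by calendar year, and the tail value is estimated here as a simple quantile):

expected_loss_all    = yearly_burn.mean()
expected_loss_last10 = yearly_burn.tail(10).mean()
expected_loss_last20 = yearly_burn.tail(20).mean()

p99 = yearly_burn.quantile(0.99)   # a high value on the tail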

Unlike pricing options on a liquid instrument such as an interest rate swap, a stock, or a currency, this option is priced in an incomplete market. There is no way to hedge, so the Black–Scholes formula cannot be used. Typically the price will be set as

\[
f(\texttt{expected loss}, \texttt{high value on the tail})
\]

where the high value on the tail is p99 or p99.9 (or similar); the function $f$ will incorporate the price maker's risk appetite, but it will be approximately

\[
\texttt{option price} \approx \texttt{expected loss} + \Delta
\]
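
As an illustration only, one possible shape for such an $f$ (entirely hypothetical; risk_appetite is a made-up parameter reflecting the seller's appetite):

def option_price(expected_loss, tail_value, risk_appetite=0.25):
    # Hypothetical pricing rule: expected loss plus a margin (Delta)
    # that grows with the distance to a high tail value such as p99.
    delta = risk_appetite * (tail_value - expected_loss)
    return expected_loss + delta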

Notes

Source code is on Github: https://github.com/carlohamalainen/maxc3sum

Most of the data manipulation is done using Python Pandas, which makes lots of things one-liners. For example, to sum the daily burn to a yearly burn, given an index of datetime.date, we can group by year and then sum the result:

    yearly_burn = daily_burn.groupby(lambda x: x.year).sum()
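
Similarly, the daily burn itself is a one-liner (a sketch with a hypothetical name; c3sum is assumed to be the Series of selected 3-sums):

    daily_burn = (c3sum - 180.0).clip(lower=0.0)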

To set the MIP gap for GLPK, pass the command line option when setting up the initial GLPK object (note that the option values must be strings):

    pulp.GLPK(options=['--mipgap', '0.1']).solve(prob)

Naturally, finding a good gap is problem-specific and requires some experimentation.

Push to Amazon Glacier using amazonka

Here is a small Haskell package for pushing files to Amazon Glacier: https://github.com/carlohamalainen/glacier-push. It uses Brendan Hay’s amazonka API, in particular amazonka-glacier.

One thing that I couldn’t find in amazonka was a way to calculate the tree hash of a file. The Glacier API needs this for each part that is uploaded as well as the whole file. Amazon explains how to calculate the tree hash in their Glacier docs and provides sample code in Java and C++. Since the algorithm is recursive, it is quite short in Haskell:

oneMb :: Int64
oneMb = 1024*1024

treeHash :: BS.ByteString -> Hash
treeHash s = treeHash' $ map sha256 $ oneMbChunks s
  where
    treeHash' []  = error "Internal error in treeHash'."
    treeHash' [x] = B16.encode x
    treeHash' xs  = treeHash' $ next xs

    next :: [BS.ByteString] -> [BS.ByteString]
    next []       = []
    next [a]      = [a]
    next (a:b:xs) = sha256 (BS.append a b) : next xs

    oneMbChunks :: BS.ByteString -> [BS.ByteString]
    oneMbChunks x
      | BS.length x <= oneMb = [x]
      | otherwise            = BS.take oneMb x : oneMbChunks (BS.drop oneMb x)

    sha256 :: BS.ByteString -> BS.ByteString
    sha256 = cs . SHA256.hashlazy
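
For reference, here is a rough Python sketch of the same tree-hash computation (standard library only, with the data held in memory; an illustration for cross-checking, not code from the package):

import hashlib

ONE_MB = 1024 * 1024

def tree_hash(data):
    # Hash each 1Mb chunk, then repeatedly hash adjacent pairs;
    # an odd chunk at the end of a level is carried up unchanged.
    chunks = [data[i:i + ONE_MB] for i in range(0, len(data), ONE_MB)] or [b'']
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    while len(hashes) > 1:
        nxt = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
               for i in range(0, len(hashes) - 1, 2)]
        if len(hashes) % 2 == 1:
            nxt.append(hashes[-1])
        hashes = nxt
    return hashes[0].hex()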

To push a large file to Glacier we do three things: initiate the multipart upload, upload each part (say, 100Mb chunks), and then finalize the upload.
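
For comparison, the same three-step flow looks roughly like this in Python with boto3 (a hedged sketch: the part size and vault name are illustrative, the file is read into memory for simplicity, and tree_hash is the helper sketched above):

import boto3

PART_SIZE = 128 * 1024 * 1024

def push(vault, path):
    glacier = boto3.client('glacier')
    with open(path, 'rb') as f:
        data = f.read()

    # 1. Initiate the multipart upload; this returns the uploadId.
    mpu = glacier.initiate_multipart_upload(
        accountId='-', vaultName=vault, partSize=str(PART_SIZE))
    upload_id = mpu['uploadId']

    # 2. Upload each part with its checksum and content range.
    for start in range(0, len(data), PART_SIZE):
        part = data[start:start + PART_SIZE]
        glacier.upload_multipart_part(
            accountId='-', vaultName=vault, uploadId=upload_id,
            range='bytes %d-%d/*' % (start, start + len(part) - 1),
            checksum=tree_hash(part), body=part)

    # 3. Complete the upload with the archive size and full tree hash.
    return glacier.complete_multipart_upload(
        accountId='-', vaultName=vault, uploadId=upload_id,
        archiveSize=str(len(data)), checksum=tree_hash(data))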

Initiate the multipart upload

We do this to get an uploadId, which is then used for each of the multipart uploads. We use initiateMultipartUpload, and need to set the part size.

initiateMulti env vault _partSize = send' env mpu
  where
    mpu = initiateMultipartUpload accountId vault
            & imuPartSize .~ (Just $ cs $ show _partSize)

Upload the parts

With the response from initiating the multipart upload (the mu parameter in uploadOnePart) we can push a single part using uploadMultipartPart. Here we have to also set the checksum and content range:

uploadOnePart env vault mu p = do
    let Part{..} = p

    body <- toHashed <$> getPart _path (_partStart, _partEnd)

    uploadId <- case mu ^. imursUploadId of
                    Nothing     -> throw MissingUploadID
                    Just uid    -> return uid

    let cr = contentRange _partStart _partEnd   -- e.g. "bytes 0-134217727/*"

    let ump = uploadMultipartPart accountId vault uploadId body
                & umpChecksum .~ (Just $ cs $ p ^. partHash)
                & umpRange    .~ Just cr

    send' env ump

  where

    contentRange :: Int64 -> Int64 -> Text
    contentRange x y = "bytes " `append` cs (show x) `append` "-" `append` cs (show y) `append` "/*"

Complete the multipart upload

Completing the multipart upload lets Glacier know that it should start its job of assembling all the parts into a full archive. We have to set the archive size and the tree hash of the entire file.

completeMulti env vault mp mu = do
    uploadId <- case mu ^. imursUploadId of
                    Nothing     -> throw MissingUploadID
                    Just uid    -> return uid

    let cmu = completeMultipartUpload accountId vault uploadId
                & cmuArchiveSize .~ (Just $ cs $ show $ mp ^. multipartArchiveSize)
                & cmuChecksum    .~ (Just $ cs $ mp ^. multipartFullHash)

    send' env cmu

Notes

In each of these functions I used a helper for sending the request:

send' env x = liftIO $ runResourceT . runAWST env $ send x

I run the main block of work in a KatipContextT since I am using katip for structured logging. Adding new key-value info to the log context is accomplished using katipAddContext.

go vault' path = do
    $(logTM) InfoS "Startup."

    let vault = cs vault'

    let myPartSize = 128*oneMb

    mp  <- liftIO $ mkMultiPart path myPartSize

    env <- liftIO $ newEnv'
    mu  <- liftIO $ initiateMulti env vault myPartSize

    let uploadId = fromMaybe (error "No UploadId in response, can't continue multipart upload.")
                 $ mu ^. imursUploadId

    partResponses <- forM (mp ^. multiParts) $ \p ->
        katipAddContext (sl "uploadId" uploadId) $
        katipAddContext (sl "location" $ fromMaybe "(nothing)" $ mu ^. imursLocation) $ do
            doWithRetries 10 (uploadOnePart env vault mu p)

    case lefts partResponses of
        []   -> do $(logTM) InfoS "All parts uploaded successfully, now completing the multipart upload."
                   catch (do completeResponse <- completeMulti env vault mp mu
                             katipAddContext (sl "uploadId" uploadId)                             $
                              katipAddContext (sl "archiveId" $ completeResponse ^. acoArchiveId) $
                               katipAddContext (sl "checksum" $ completeResponse ^. acoChecksum ) $
                                katipAddContext (sl "location" $ completeResponse ^. acoLocation) $ do
                                  $(logTM) InfoS "Done"
                                  liftIO exitSuccess)
                         (\e -> do logServiceError "Failed to complete multipart upload." e
                                   $(logTM) ErrorS "Failed."
                                   liftIO exitFailure)

        errs -> do forM_ errs (logServiceError "Failed part upload.")
                   $(logTM) ErrorS "Too many part errors."
                   liftIO exitFailure

logServiceError msg (ServiceError e)
    = let smsg :: Text
          smsg = toText $ fromMaybe "" $ e ^. serviceMessage

          scode :: Text
          scode = toText $ e ^. serviceCode

        in katipAddContext (sl "serviceMessage" smsg) $
            katipAddContext (sl "serviceCode" scode)  $
             (headersAsContext $ e ^. serviceHeaders) $
               $(logTM) ErrorS msg

logServiceError msg (TransportError e)
    = let txt :: Text
          txt = toText $ show e
        in katipAddContext (sl "TransportError" txt) $
            $(logTM) ErrorS msg

logServiceError msg (SerializeError e)
    = let txt :: Text
          txt = toText $ show e
        in katipAddContext (sl "SerializeError" txt) $
            $(logTM) ErrorS msg

I found it handy to write this little helper function to turn each header from an amazonka ServiceError into a Katip context key/value pair:

headersAsContext :: KatipContext m => [Header] -> m a -> m a
headersAsContext hs = foldl (.) id $ map headerToContext hs
  where
    headerToContext :: KatipContext m => Header -> m a -> m a
    headerToContext (h, x) = let h' = cs $ CI.original h :: Text
                                 x' = cs x               :: Text
                               in katipAddContext (sl h' x')

Katip can write to ElasticSearch using katip-elasticsearch. Then you’d be able to search for errors on specific header fields, etc.

Sample run

glacier-push-exe basement myfile
[2017-08-30 12:44:47][glacier-push.main][Info][x4][1724][ThreadId 7][main:Main app/Main.hs:300:7] Startup.
[2017-08-30 12:44:50][glacier-push.main][Info][x4][1724][ThreadId 7][location:/998720554704/vaults/basement/multipart-uploads/vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][partEnd:134217727][partStart:0][uploadId:vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][main:Main app/Main.hs:213:15] Uploading part.
[2017-08-30 12:45:45][glacier-push.main][Info][x4][1724][ThreadId 7][location:/998720554704/vaults/basement/multipart-uploads/vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][partEnd:268435455][partStart:134217728][uploadId:vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][main:Main app/Main.hs:213:15] Uploading part.
[2017-08-30 12:46:37][glacier-push.main][Info][x4][1724][ThreadId 7][location:/998720554704/vaults/basement/multipart-uploads/vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][partEnd:293601279][partStart:268435456][uploadId:vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][main:Main app/Main.hs:213:15] Uploading part.
[2017-08-30 12:46:55][glacier-push.main][Info][x4][1724][ThreadId 7][main:Main app/Main.hs:320:22] All parts uploaded successfully, now completing the multipart upload.
[2017-08-30 12:46:57][glacier-push.main][Info][x4][1724][ThreadId 7][archiveId:bImG6jM0eQGNC7kIJTsC_wtcAXdPDUtJ-NyfstrkxeyTtXC_iEgkvenH-h_eQH-LYbhVKWJM7WuZlb7OHKtgKJNEpOtVaqxEhlNRTHphUtLCurcHAQDHKkiTnIXTpFxgPgvP9Q0axA][checksum:4f08645d8f3705dc222eef7547591c400362806abb7a6298b9267ebf2be7d901][location:/998720554704/vaults/basement/archives/bImG6jM0eQGNC7kIJTsC_wtcAXdPDUtJ-NyfstrkxeyTtXC_iEgkvenH-h_eQH-LYbhVKWJM7WuZlb7OHKtgKJNEpOtVaqxEhlNRTHphUtLCurcHAQDHKkiTnIXTpFxgPgvP9Q0axA][uploadId:vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs][main:Main app/Main.hs:326:37] Done

The lines are pretty long (as they are intended for consumption into ElasticSearch, not human parsing) so here is one with line breaks:

[2017-08-30 12:45:45]
 [glacier-push.main]
 [Info]
 [x4]
 [1724]
 [ThreadId 7]
 [location:/998720554704/vaults/basement/multipart-uploads/vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs]
 [partEnd:268435455]
 [partStart:134217728]
 [uploadId:vZMCGNsLGhfTJ_hJ-CJ_OF_juCAY1IaZDl_A3YqOZXnuQRH_AtPiMaUE-K1JDew-ZwiIuDZgR3QbjsJIEfWtGeMNeKDs]
 [main:Main app/Main.hs:213:15] Uploading part.

Checking out and compiling

Use Stack. Then:

$ git clone https://github.com/carlohamalainen/glacier-push.git
$ cd glacier-push
$ stack build

To browse the source on Github, have a look at https://github.com/carlohamalainen/glacier-push.

FSA B3164 bottom bracket replacement

My Boardman Team TXC 650b hardtail mountain bike has an FSA crankset and bottom bracket. After a year and a half the bottom bracket got quite rough so I decided to swap it out with a Hope Hollowtech II bottom bracket.

I couldn’t find much online about this bottom bracket. Markings include: “FSA B3164 MegaExo 24mm MS185” and “BC1.37″ x 24T”.

I replaced it with this Hope bottom bracket, the 68/73mm model.

[Photo: Hope bottom bracket]

It comes with three spacers but not all are needed.

The drive side cup of the new bottom bracket screwed in easily but I stripped the thread of the bottom bracket on the non-drive side, probably due to gunk in the threads.

I managed to get the cup out and then used a dremel (on low speed) to clean up the threads.

To chase the threads, I made a few cuts into the new bottom bracket cup (following this video), which let it bite into the threads as I pushed it into the bottom bracket shell. Be warned: if you get this wrong you'll probably destroy the threads completely, rendering the frame junk.

 

ghc-imported-from => ghc-mod (August 2017)

I have a pull request to merge ghc-imported-from into ghc-mod. The main benefit of being part of ghc-mod is that I don’t have to duplicate ghc-mod’s infrastructure for handling sandboxes, GHC options, interfaces to other build tools like Stack, and compatibility with more versions of GHC.

The pull request is still under review, so until then you can try it out by cloning the development branches:

git clone -b imported-from https://github.com/DanielG/ghc-mod.git ghc-mod-imported-from
cd ghc-mod-imported-from
cabal update && cabal sandbox init && cabal install
export PATH=`pwd`/.cabal-sandbox/bin:$PATH

Assuming that you use Plugged for managing Vim/Neovim plugins, use my branch of ghcmod-vim by adding this to your vimrc:

call plug#begin('~/.vim/plugged')

Plug 'carlohamalainen/ghcmod-vim', { 'branch': 'ghcmod-imported-from-cmd', 'for' : 'haskell' }

Install the plugin with :PlugInstall in vim.

Recently,

xdg-open

stopped working for me (others have had the same issue) so I recommend setting

ghcmod_browser

in your vimrc:

let g:ghcmod_browser = '/usr/bin/firefox'

Here are some handy key mappings:

au FileType  haskell nnoremap  :GhcModType
au FileType  haskell nnoremap  :GhcModInfo
au FileType  haskell nnoremap  :GhcModTypeClear

au FileType lhaskell nnoremap  :GhcModType
au FileType lhaskell nnoremap  :GhcModInfo
au FileType lhaskell nnoremap  :GhcModTypeClear

au FileType haskell  nnoremap  :GhcModOpenDoc
au FileType lhaskell nnoremap  :GhcModOpenDoc

au FileType haskell  nnoremap  :GhcModDocUrl
au FileType lhaskell nnoremap  :GhcModDocUrl

au FileType haskell  vnoremap  :GhcModOpenHaddockVismode
au FileType lhaskell vnoremap  :GhcModOpenHaddockVismode

au FileType haskell  vnoremap  :GhcModEchoUrlVismode
au FileType lhaskell vnoremap  :GhcModEchoUrlVismode

On the command line, use the imported-from command. It tells you the defining module, the exporting module, and the Haddock URL:

$ ghc-mod imported-from Foo.hs 9 34
base-4.8.2.0:System.IO.print Prelude /opt/ghc/7.10.3/share/doc/ghc/html/libraries/base-4.8.2.0/Prelude.html

From Vim/Neovim, navigate to a symbol and hit F4 which will open the Haddock URL in your browser, or F5 to echo the command-line output.

Ubuntu 17 – device not managed

I plugged in a D-Link DUB-1312 to my laptop running Ubuntu Zesty but Network Manager said that the interface was “not managed”.

The fix, found here, is to remove the contents of one file. Better to save the original file and touch an empty one:

$ sudo mv    /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf{,_ORIGINAL}
$ sudo touch /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

For reference, here’s the info about the DUB-1312 USB ethernet adapter:

$ sudo apt update
$ sudo apt install hwinfo
$ sudo hwinfo --netcard

(other output snipped)

40: USB 00.0: 0200 Ethernet controller
  [Created at usb.122]
  Unique ID: VQs5.d0KcpDt5qE6
  Parent ID: 75L1.MLPSY0FvjsF
  SysFS ID: /devices/pci0000:00/0000:00:14.0/usb2/2-6/2-6.4/2-6.4.3/2-6.4.3:1.0
  SysFS BusID: 2-6.4.3:1.0
  Hardware Class: network
  Model: "D-Link DUB-1312"
  Hotplug: USB
  Vendor: usb 0x2001 "D-Link"
  Device: usb 0x4a00 "D-Link DUB-1312"
  Revision: "1.00"
  Serial ID: "000000000005FA"
  Driver: "ax88179_178a"
  Driver Modules: "ax88179_178a"
  Device File: enxe46f13f4be18
  HW Address: e4:6f:13:f4:be:18
  Permanent HW Address: e4:6f:13:f4:be:18
  Link detected: yes
  Module Alias: "usb:v2001p4A00d0100dcFFdscFFdp00icFFiscFFip00in00"
  Driver Info #0:
    Driver Status: ax88179_178a is active
    Driver Activation Cmd: "modprobe ax88179_178a"
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #33 (Hub)

Structured logging to AWS ElasticSearch

A while ago I wrote about how to set up a structured logging service using PostgreSQL. AWS now makes it possible to have the same functionality (plus more) in the “serverless” style. For background on the idea of serverless architecture, watch this talk: GOTO 2017 • Serverless: the Future of Software Architecture • Peter Sbarski. Parts of this post are based on this guide on serverless AWS lambda elasticsearch and kibana.

First, create an Amazon Elasticsearch Service Domain. I used the smallest instance size since it’s just for personal use. Full docs are here.

For programmatic access control, create an AWS IAM user and make a note of its “arn” identifier, e.g. arn:aws:iam::000000000000:user/myiamuser. Then add an access policy as follows. We also allow access from our own IP address for the Kibana interface. I made an ElasticSearch domain called “logs”; see the Resource field below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::000000000000:user/myiamuser"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:ap-southeast-1:000000000000:domain/logs/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:ap-southeast-1:000000000000:domain/logs/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "xxx.xxx.xxx.xxx"
        }
      }
    }
  ]
}

To post to the ElasticSearch instance we use requests-aws4auth:

sudo pip install requests-aws4auth

Then we can post a document, a json blob, using the following script. Set the host, region, AWS key, and AWS secret key. This script saves the system temperature under an index system-stats with the ISO date attached.

import datetime 

from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

HOST        = 'search-logs-xxxxxxxxxxxxxxxxxxxxxxxxxx.ap-southeast-1.es.amazonaws.com' # see 'Endpoint' in ES status page
REGION      = 'ap-southeast-1' # choose the correct region
AWS_KEY     = 'XXXXXXXXXXXXXXXXXXXX'
AWS_SECRET  = 'YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY'
 
def get_temp():
    return 42.0 # actually read from 'sensors' or similar

if __name__ == '__main__':
    now  = datetime.datetime.now()
    date = now.date().isoformat()

    doc = {'host': 'blah', 'temperature': get_temp(), 'datetime': now.isoformat()}

    awsauth = AWS4Auth(AWS_KEY, AWS_SECRET, REGION, 'es')

    es = Elasticsearch(
            hosts=[{'host': HOST, 'port': 443}],
            http_auth=awsauth, use_ssl=True, verify_certs=True,
            connection_class=RequestsHttpConnection)

    _index = 'system-stats-' + date
    _type  = 'temperature'
    print(doc)
    print(es.index(index=_index, doc_type=_type, body=doc))

To query data we use elasticsearch-dsl:

sudo pip install elasticsearch-dsl

from elasticsearch import Elasticsearch
from elasticsearch import RequestsHttpConnection
from requests_aws4auth import AWS4Auth

from elasticsearch_dsl import Search

from datetime import datetime

HOST        = 'search-logs-xxxxxxxxxxxxxxxxxxxxxxxxxx.ap-southeast-1.es.amazonaws.com' # see 'Endpoint' in ES status page
REGION      = 'ap-southeast-1' # choose the correct region
AWS_KEY     = 'XXXXXXXXXXXXXXXXXXXX'
AWS_SECRET  = 'YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY'

awsauth = AWS4Auth(AWS_KEY, AWS_SECRET, REGION, 'es')

client = Elasticsearch(
            hosts=[{'host': HOST, 'port': 443}],
            http_auth=awsauth, use_ssl=True, verify_certs=True,
            connection_class=RequestsHttpConnection)

plot_date      = '2017-08-06'
monitored_host = 'blah'

s = Search(using=client, index='system-stats-' + plot_date) \
       .query('match', host=monitored_host)

response = s.execute()

xy = [(datetime.strptime(hit.datetime, '%Y-%m-%dT%H:%M:%S.%f'), hit.temperature) for hit in response]
xy = sorted(xy, key=lambda z: z[0])

for (x, y) in xy:
    print(x,y)

Sample output:

$ python3 dump.py 
2017-08-06 04:00:02.337370 32.0
2017-08-06 05:00:01.779796 37.0
2017-08-06 07:00:01.789370 37.0
2017-08-06 11:00:01.711586 40.0
2017-08-06 12:00:02.054906 42.0
2017-08-06 16:00:02.075869 44.0
2017-08-06 18:00:01.619764 43.0
2017-08-06 19:00:02.319470 38.0
2017-08-06 20:00:03.098032 43.0
2017-08-06 22:00:03.629017 43.0

For exploring the data you can also use kibana, which is included with the ElasticSearch service from AWS.

Another nifty thing about the AWS infrastructure is that you can use Lambda to create ElasticSearch entries when objects drop in an S3 bucket. More details in this post.
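
A rough sketch of what such a Lambda handler might look like (hypothetical; it reuses the HOST, REGION, AWS_KEY and AWS_SECRET constants from the scripts above, and a real deployment would normally use an IAM role rather than keys):

from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

# HOST, REGION, AWS_KEY, AWS_SECRET defined as in the scripts above.

def handler(event, context):
    awsauth = AWS4Auth(AWS_KEY, AWS_SECRET, REGION, 'es')

    es = Elasticsearch(
            hosts=[{'host': HOST, 'port': 443}],
            http_auth=awsauth, use_ssl=True, verify_certs=True,
            connection_class=RequestsHttpConnection)

    # An S3 "object created" event carries one or more records, each
    # naming the bucket and key that triggered the function.
    for record in event['Records']:
        doc = {'bucket': record['s3']['bucket']['name'],
               'key':    record['s3']['object']['key'],
               'size':   record['s3']['object'].get('size')}
        es.index(index='s3-objects', doc_type='s3-object', body=doc)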

Stripping html tags using TagSoup

I had a situation, when converting old blog posts to WordPress, where I wanted to strip all the extra info on the pre tags. For example this:

<pre><code><span style="">&gt;</span> <span style="color: blue; font-weight: bold;">import</span> <span style="">Data</span><span style="">.</span><span style="">Char</span>

would turn into:

>import Data.Char

It turns out that this is really easy using TagSoup.

module Detag where

import Control.Monad
import Text.HTML.TagSoup

The function to strip tags works on a list of tags of strings:

strip :: [Tag String] -> [Tag String]

strip [] = []

If we hit a pre tag, ignore its info (the underscore) and continue on recursively:

strip (TagOpen "pre" _ : rest) = TagOpen "pre" [] : strip rest

Similarly, strip the info off an opening code tag:

strip (TagOpen  "code" _ : rest) = strip rest
strip (TagClose "code"   : rest) = strip rest

If we hit a span, followed by some text, and a closing span, then keep the text tag and continue:

strip (TagOpen "span" _ : TagText t : TagClose "span" : rest)
  = TagText t : strip rest

Don’t change other tags:

strip (t:ts) = t : strip ts

Parsing input from stdin is straightforward. We use optEscape and optRawTag to avoid mangling other html in the input.

main :: IO ()
main = do
    s <- getContents
    let tags = parseTags s
        ropts = renderOptions{optEscape = id, optRawTag = const True}
    putStrLn $ renderTagsOptions ropts $ strip tags

Example output:

$ runhaskell Detag.hs 
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span style="">&gt;</span> <span style="color: green;">{-# LANGUAGE RankNTypes          #-}</span>
<pre>> {-# LANGUAGE RankNTypes          #-}

Data.Proxy

Short note on Data.Proxy based on this Stackoverflow answer.

First, a few imports:

> {-# LANGUAGE RankNTypes          #-}
> {-# LANGUAGE ScopedTypeVariables #-}
>
> module Proxy where
>
> import Data.Proxy
> import Text.Read

Suppose we want to check if some fuzzy real world data can be read as certain concrete types. We could write a few helper functions using readMaybe:

> readableAsInt :: String -> Bool
> readableAsInt s
>   = case readMaybe s of
>       Just (_ :: Int) -> True
>       _               -> False
>
> readableAsDouble :: String -> Bool
> readableAsDouble s
>   = case readMaybe s of
>       Just (_ :: Double) -> True
>       _                  -> False
>
> readableAsBool :: String -> Bool
> readableAsBool s
>   = case readMaybe s of
>       Just (_ :: Bool) -> True
>       _                -> False

These are all basically the same. How to generalise? Let’s try a typeclass.

> class ReadableAs t where
>    readableAs :: String -> Bool

This doesn’t work since readableAs doesn’t depend on the type t:

    The class method ‘readableAs’
    mentions none of the type or kind variables of the class ‘ReadableAs t’
    When checking the class method: readableAs :: String -> Bool
    In the class declaration for ‘ReadableAs’
Failed, modules loaded: none.

So put the type in:

> class ReadableAs' t where
>    readableAs' :: t -> String -> Bool

This compiles, so let’s write some instances:

> instance ReadableAs' Int where
>   readableAs' _ s
>      = case readMaybe s of
>          Just (_ :: Int) -> True
>          _               -> False
>
> instance ReadableAs' Double where
>   readableAs' _ s
>      = case readMaybe s of
>          Just (_ :: Double) -> True
>          _                  -> False
>
> instance ReadableAs' Bool where
>   readableAs' _ s
>      = case readMaybe s of
>          Just (_ :: Bool) -> True
>          _                -> False

Using it is clunky since we have to come up with a concrete value for the first argument:

 > readableAs' (0::Int) "0"
 True
 > readableAs' (0::Double) "0"
 True

For some types we could use Data.Default for this placeholder value. But for other types nothing will make sense. How do we choose a default value for Foo?

> data Foo = Cat | Dog

Haskell has non-strict evaluation so we can use undefined, but, ugh. Bad idea.

 > readableAs' (undefined::Int) "0"
 True

So let’s try out Proxy. It has a single constructor and a free type variable that we can set:

 > :t Proxy
 Proxy :: Proxy t
 > Proxy :: Proxy Bool
 Proxy
 > Proxy :: Proxy Int
 Proxy
 > Proxy :: Proxy Double
 Proxy

Let’s use Proxy t instead of t:

> class ReadableAsP t where
>    readableAsP :: Proxy t -> String -> Bool
>
> instance ReadableAsP Int where
>   readableAsP _ s
>      = case readMaybe s of
>          Just (_ :: Int) -> True
>          _               -> False
>
> instance ReadableAsP Double where
>   readableAsP _ s
>      = case readMaybe s of
>          Just (_ :: Double) -> True
>          _                  -> False
>
> instance ReadableAsP Bool where
>   readableAsP _ s
>      = case readMaybe s of
>          Just (_ :: Bool) -> True
>          _                -> False

This works, and we don’t have to come up with the unused concrete value:

 > readableAsP (Proxy :: Proxy Bool) "0"
 False
 > readableAsP (Proxy :: Proxy Bool) "True"
 True
 > readableAsP (Proxy :: Proxy Int) "0"
 True
 > readableAsP (Proxy :: Proxy Double) "0"
 True
 > readableAsP (Proxy :: Proxy Double) "0.0"
 True

Still, there’s a lot of duplication in the class and instances. We can do away with the class entirely. With the ScopedTypeVariables language extension and the forall, the t in the type signature can be referred to in the body:

> readableAs :: forall t. Read t => Proxy t -> String -> Bool
> readableAs _ s
>      = case readMaybe s of
>          Just (_ :: t) -> True
>          _             -> False

 > readableAs (Proxy :: Proxy Int) "0"
 True
 > readableAs (Proxy :: Proxy Int) "foo"
 False

Archived comments


Franklin Chen

2017-03-25 02:23:07.94742 UTC

This can also now be done without Proxy, thanks to explicit type application:

{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE AllowAmbiguousTypes #-}

import Data.Maybe (isJust)

readableAs :: forall t. Read t => String -> Bool
readableAs = isJust @t . readMaybe

example1 = readableAs @Int "0"
example2 = readableAs @Int "foo"
example3 = readableAs @Double "0.1"

ghc-imported-from => ghc-mod (March 2017)

I have a pull request to merge ghc-imported-from into ghc-mod. The main benefit of being part of ghc-mod is that I don’t have to duplicate ghc-mod’s infrastructure for handling sandboxes, GHC options, interfaces to other build tools like Stack, and compatibility with more versions of GHC.

The pull request is still under review, so until then you can try it out by cloning the development branches:

git clone -b imported-from https://github.com/DanielG/ghc-mod.git ghc-mod-imported-from
cd ghc-mod-imported-from
cabal update && cabal sandbox init && cabal install
export PATH=`pwd`/.cabal-sandbox/bin:$PATH

Assuming that you use Plugged for managing Vim/Neovim plugins, use my branch of ghcmod-vim by adding this to your vimrc:

call plug#begin('~/.vim/plugged')

Plug 'carlohamalainen/ghcmod-vim', { 'branch': 'ghcmod-imported-from-cmd', 'for' : 'haskell' }

Install the plugin with :PlugInstall in vim.

Here are some handy key mappings:

au FileType  haskell nnoremap            :GhcModType
au FileType  haskell nnoremap            :GhcModInfo
au FileType  haskell nnoremap    :GhcModTypeClear

au FileType lhaskell nnoremap            :GhcModType
au FileType lhaskell nnoremap            :GhcModInfo
au FileType lhaskell nnoremap    :GhcModTypeClear

au FileType haskell  nnoremap   :GhcModOpenDoc
au FileType lhaskell nnoremap   :GhcModOpenDoc

au FileType haskell  nnoremap   :GhcModDocUrl
au FileType lhaskell nnoremap   :GhcModDocUrl

au FileType haskell  vnoremap  : GhcModOpenHaddockVismode
au FileType lhaskell vnoremap  : GhcModOpenHaddockVismode

au FileType haskell  vnoremap  : GhcModEchoUrlVismode
au FileType lhaskell vnoremap  : GhcModEchoUrlVismode

On the command line, use the imported-from command. It tells you the defining module, the exporting module, and the Haddock URL:

$ ghc-mod imported-from Foo.hs 9 34 show
base-4.8.2.0:GHC.Show.show Prelude https://hackage.haskell.org/package/base-4.8.2.0/docs/Prelude.html

From Vim/Neovim, navigate to a symbol and hit F4 which will open the Haddock URL in your browser, or F5 to echo the command-line output.

Raspbian with full disk encryption

This blog post shows how to convert a standard Raspbian installation to full disk encryption. The encryption passphrase can be entered at the physical console or via a dropbear ssh session.

I mainly follow the Offensive Security guide.

What you need:

  • Raspberry Pi.
  • Laptop with a microSD card slot. I used my X1 Carbon running Ubuntu xenial (amd64).

First, install Raspbian. With a 32Gb microSD card the partitions are:

/dev/mmcblk0p2                29G  4.8G   23G  18% /media/carlo/7f593562-9f68-4bb9-a7c9-2b70ad620873
/dev/mmcblk0p1                63M   21M   42M  34% /media/carlo/boot

It’s a good idea to make a backup of the working installation:

dd if=/dev/mmcblk0 of=pi-debian-unencrypted-backup.img

Also make a note of the start/end of the main partition. This will be needed later.

Install the qemu static (on the laptop, not the Pi):

sudo apt update
sudo apt install qemu-user-static

Create directories for the chroot. Easiest to do all of this as root. Pop the sd card into the laptop and drop into a chroot:

mkdir -p pi/chroot/boot

mount /dev/mmcblk0p2 pi/chroot/
mount /dev/mmcblk0p1 pi/chroot/boot/

mount -t proc  none     pi/chroot/proc
mount -t sysfs none     pi/chroot/sys
mount -o bind /dev      pi/chroot/dev
mount -o bind /dev/pts  pi/chroot/dev/pts

cp /usr/bin/qemu-arm-static pi/chroot/usr/bin/
LANG=C chroot pi/chroot/

Next we need to install a few things in the chroot. If these fail with

root@x4:/# apt update
qemu: uncaught target signal 4 (Illegal instruction) - core dumped
Illegal instruction (core dumped)

then comment out the libarmmem line in ld.so.preload:

root@x4:~# cat pi/chroot/etc/ld.so.preload
#/usr/lib/arm-linux-gnueabihf/libarmmem.so

Install the things:

apt update
apt install busybox cryptsetup dropbear

To create an initramfs image we need the kernel version. In my case it is 4.4.50-v7+.

root@x4:/# ls -l /lib/modules/
total 8
drwxr-xr-x 3 root root 4096 Mar 11 20:31 4.4.50+
drwxr-xr-x 3 root root 4096 Mar 11 20:31 4.4.50-v7+

Create the image, enable ssh, and set the root password:

root@x4:/# mkinitramfs -o /boot/initramfs.gz 4.4.50-v7+
root@x4:/# update-rc.d ssh enable
root@x4:/# passwd

Set the boot command line. Previously I had:

root@x4:/# cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

The new one refers to the encrypted partition:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mapper/crypt_sdcard cryptdevice=/dev/mmcblk0p2:crypt_sdcard rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

Add this to the boot config:

echo "initramfs initramfs.gz 0x00f00000" >> /boot/config.txt

Cat the private key and copy to the laptop; save as pikey:

root@x4:/# cat /etc/initramfs-tools/root/.ssh/id_rsa

The Offensive Security guide talks about editing /etc/initramfs-tools/root/.ssh/authorized_keys so that the ssh login can only run /scripts/local-top/cryptroot. I had no luck getting this to work, running into weird issues with Plymouth. For some reason the password prompt appeared on the physical console of the Pi, not the ssh session. So I skipped this and manually use the dropbear session to enter the encryption passphrase.

Set /etc/fstab to point to the new root partition. Original file:

root@x4:/# cat /etc/fstab
proc            /proc           proc    defaults          0       0
/dev/mmcblk0p1  /boot           vfat    defaults          0       2
/dev/mmcblk0p2  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

New file (only one line changed, referring to /dev/mapper):

root@x4:/# cat /etc/fstab
proc            /proc           proc    defaults          0       0
/dev/mmcblk0p1  /boot           vfat    defaults          0       2
/dev/mapper/crypt_sdcard /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

Edit /etc/crypttab to look like this:

root@x4:/# cat /etc/crypttab
# <target name>  <source device>  <key file>  <options>
crypt_sdcard /dev/mmcblk0p2 none luks

The Offensive Security guide mentions that there can be issues with ports taking a while to wake up, so they recommend adding a 5 second sleep before the configure_networking line in /usr/share/initramfs-tools/scripts/init-premount/dropbear:

echo "Waiting 5 seconds for USB to wake"
sleep 5
configure_networking &

Regenerate the image:

root@x4:/# mkinitramfs -o /boot/initramfs.gz 4.4.50-v7+
device-mapper: table ioctl on crypt_sdcard failed: No such device or address
Command failed
cryptsetup: WARNING: failed to determine cipher modules to load for crypt_sdcard
Unsupported ioctl: cmd=0x5331

Now pop out of the chroot (Ctrl-D), unmount some things, and make a backup:

umount pi/chroot/boot
umount pi/chroot/sys
umount pi/chroot/proc
mkdir -p pi/backup
rsync -avh pi/chroot/* pi/backup/

umount pi/chroot/dev/pts
umount pi/chroot/dev
umount pi/chroot

Encrypt the partition, unlock it, and rsync the data back:

cryptsetup -v -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/mmcblk0p2
cryptsetup -v luksOpen /dev/mmcblk0p2 crypt_sdcard
mkfs.ext4 /dev/mapper/crypt_sdcard

mkdir -p pi/encrypted
mount /dev/mapper/crypt_sdcard pi/encrypted/
rsync -avh pi/backup/* pi/encrypted/

umount pi/encrypted/
cryptsetup luksClose /dev/mapper/crypt_sdcard
sync

Now put the sd card into the Pi and boot it up. If you see this on the console:

/scripts/local-top/cryptroot: line 1: /sbin/cryptsetup: not found

it means that the initramfs image didn’t include the cryptsetup binary. It is a known bug and the workaround that worked for me was:

echo "export CRYPTSETUP=y" >> /usr/share/initramfs-tools/conf-hooks.d/forcecryptsetup

(I had to do this in the chroot environment and rebuild the initramfs image. Ugh.)

For some reason I had Plymouth asking for the password on the physical console instead of the ssh dropbear connection. This is another known issue. This workaround looked promising but it broke the physical console as well as the ssh connection. No idea why.

What does work for me is to use the dropbear session to manually kill Plymouth and then enter the encryption password:

ssh -i pikey root@192.168.0.xxx

Then in the busybox session:

kill $(pidof plymouthd)
# Wait a few seconds...
echo -ne password > /lib/cryptsetup/passfifo
/scripts/local-top/cryptroot

This lets you enter the encryption passphrase. After a few seconds the normal boot process continues. So you can enter the encryption passphrase with a real keyboard if you are physically with the Pi, or you can ssh in if you are remote.