Monday 22 March 2021

Teardown of Ubiquiti Unifi Ethernet Surge Protector ETH-SP-G2

 I've recently been running some outdoor ethernet cables for an extra wifi access point and some IP cameras, so thought it was time to invest in some ethernet surge protectors.


Some quick research showed a fair choice of devices, ranging from £4 devices on eBay direct from China up to the APC PNET1GB ProtectNet at around £30 (and then they start getting really expensive - Farnell list many in the £100-£200 range).

The Unifi model sits somewhere in the middle (I managed to get them for about £10 each - about 14 USD at time of writing) so seemed a decent bet:

I couldn't find a teardown online though, so was left curious as to exactly what they do. They come apart relatively easily - the metal part just slides out; the difficulty is levering it out without damaging the plastic / thin metal - and look like this inside:

Inside of ETH-SP-G2 Ethernet Surge Protector

The 8 round devices appear to be labelled "2R 90 19". I can't find that part number listed anywhere online but they are presumably gas discharge tube arresters. Each has one side connected to one of the RJ45 pins and the other side connected to the ground connection.

As with any kind of surge protection, the goal is to provide an easy path for the surge to follow so it doesn't get anywhere near the equipment you want to protect - so these seem to do what they say on the tin.

It's important to install them as per the instructions - noting in particular the requirement to provide an earth connection. This is done either via the included self-tapper if you're mounting on an earthed metal pole, or with a flying earth lead with a tag on the end (attaches with an M5 nut/bolt, not supplied). I've seen a few pictures posted on forums of people's installs where there is no obvious earth connection to the surge protector, which isn't likely to work well - absolute best case it'll do something if you're using shielded cable and the shield is earthed somewhere, worst case it'll do nothing at all as there's nowhere for it to redirect the surge to.

It would generally be good practice to use shielded cables/connectors and to fit surge protectors at both ends of the cable - but as long as you fit the surge protector soon after the cable enters the building and have earthed it properly, it'll be giving some protection.

Wednesday 10 March 2021

Collection of app2app articles/presentations

This is a list of links to the various articles/presentations on app2app I've done or been involved with:




2020 Presentation at OAuth Security Workshop (the first approximately 10 minutes overlaps with the Identiverse presentation, the remainder cover different areas)




And some example videos showing the flow:




I also run training courses on app2app that go into more detail on the implementation, best practices, common patterns (and anti-patterns) and potential problems - an initial 90 minute training session plus workshops as necessary, particularly useful for banks that plan to implement app2app in their apps/authorization servers. Please drop me an email if you're interested.

Monday 7 September 2020

Ubuntu Server 20.04: Intel NUCs / linux / e1000e driver, checksum offloading and 'Detected Hardware Unit Hang'

On a project I've been working on we recently replaced various test servers with Intel 10th gen NUCs - or to be exact, the BXNUC10I7FNH1 - though the moral of this story appears to apply to many Intel NUCs and in fact also a wide range of motherboards that use Intel ethernet controllers.

The servers run linux, Ubuntu Server 20.04 - though this problem persists across a wide range of linux distributions and versions.

Shortly after we migrated onto the new servers, we discovered weird networking issues - sometimes the database backup (which copies to a remote host over ssh) would fail with 'Received disconnect from 192.168.1.64: 2: Packet corrupt'.

The test scripts running there were sometimes behaving oddly too - they load files over NFS, and sometimes those files would load with one block missing, or would fail to load, or would load but take 30 seconds or more longer than usual.

Investigation revealed messages like this in /var/log/kern.log on the host machine:

Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   TDH                  <58>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   TDT                  <6b>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   next_to_use          <6b>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   next_to_clean        <57>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] buffer_info[next_to_clean]:
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   time_stamp           <113de7945>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   next_to_watch        <58>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   jiffies              <113de8248>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574]   next_to_watch.status <0>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] MAC Status             <40080083>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] PHY Status             <796d>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] PHY 1000BASE-T Status  <3c00>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] PHY Extended Status    <3000>
Sep  5 05:10:32 atsnuc1 kernel: [1333685.771574] PCI Status             <10>
Sep  5 05:10:33 atsnuc1 kernel: [1333686.763389] e1000e 0000:00:1f.6 eno1: Reset adapter unexpectedly
Sep  5 05:10:38 atsnuc1 kernel: [1333692.507952] e1000e: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

which coincided with the time the problems occurred.

Further investigation showed this is far from an isolated problem - a quick google revealed thousands of posts about similar issues, dating back over about 10 years, often culminating in a bug report that (if you were lucky) referenced a particular fix for a particular chipset - some of which were allegedly included in kernels newer than the 5.4 one Ubuntu Server 20.04 ships with. (For completeness, I'm sure many of these problems were caused by faulty hardware like bad cables, but from the sheer number it's also clear that many weren't.)

We tried a range of kernel versions, right up to the very bleeding edge 5.6.x, without any change in behaviour. We tried changing out various pieces of hardware. Nothing helped.

The eventual conclusion of many of these posts is that a workaround is to disable the offloading of checksums to the network hardware - the problem is fairly well explained in a blog post by Michael Mulqueen.

I've been unable to figure out if this is a hardware or software problem - but if it's a hardware problem, it's a bug that affects a number of entire chipset lines.

The workaround that seems to work for most people is to disable tx/rx checksum offloading by running:

ethtool -K eno1 tx off rx off

(replacing 'eno1' with the relevant interface name)
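
You can check the current offload settings (and later confirm the change has stuck) with ethtool's lowercase -k option - a quick check, again assuming the interface is called eno1:

ethtool -k eno1 | grep -E 'tx-checksumming|rx-checksumming'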

This is lost on reboot; to apply it automatically on each boot (on Ubuntu Server 20.04), create an executable file /etc/networkd-dispatcher/routable.d/10-disable-offloading with contents:

#!/bin/sh

# disable tx/rx checksum offloading - workaround for e1000e 'Detected Hardware Unit Hang'
logger $0 -- "Running: ethtool -K eno1 tx off rx off"
logger $0 -- `ethtool -K eno1 tx off rx off 2>&1`
logger $0 -- done

(This actually runs more than once during boot and whenever the interface becomes routable again, but re-running it is harmless - it's effectively a no-op once offloading has already been disabled.)

So far this is solving it for me - tracking all this down wasted a whole bunch of time for my colleagues and myself; I'm frankly really quite shocked that there seems to be very little sign of Intel putting any effort into solving this problem properly. For future projects I'm going to be avoiding Intel chipset hardware.

Here's two more links with a bit more background:

https://serverfault.com/questions/421995/disable-tcp-offloading-completely-generically-and-easily

https://www.freedesktop.org/software/systemd/man/networkctl.html

Wednesday 29 April 2020

Fixing smart mailboxes showing the wrong messages / erasing spotlight indexes on macOS Catalina

macOS Catalina introduced changes that mean many of the articles you'll find about resetting Spotlight to fix smart mailboxes not working properly are wrong.

Firstly you can use this command to see the status of the indexes on all volumes:

mdutil -s -a -v

and this command will reset the index for the data volume (where all your user data now lives) and should kick off a reindex (if you check with Activity Monitor you should see mds_stores using lots of CPU to generate the new index):

mdutil -E /System/Volumes/Data

(Most blogs suggest using the path of just /, which won't help on Catalina as that partition now only contains system files and none of the user data.)
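
You can also query that volume on its own to confirm indexing is enabled and a rebuild is underway:

mdutil -s /System/Volumes/Data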

Some people suggested the -X flag is necessary in some cases, but I'm not convinced.

Thursday 1 August 2019

Implementing app-to-app authorisation in OAuth2/OpenID Connect

What is app2app?

App2app is a mechanism that allows mobile apps performing OAuth2 or OpenID Connect based authentication to offer a much simpler, faster flow if the user already has an app provided by the authorization server owner installed on their mobile device. Here's how it actually looks when I grant the moneyhub app on my iPhone access to my Nationwide current account:


It will be familiar to some in the UK - it is already in use by some challenger banks, and the largest 9 banks in the UK (the ‘CMA9’) are required to implement ‘app2app’ as a consequence of an order from the Competition & Markets Authority (a partially successful attempt to rectify the dysfunctional banking market we have had in the UK for a long time).

It's also noteworthy that the experience from the UK rollout of app2app is that drop off rates (where the user starts the process of giving a third party provider access to their bank account, but does not complete the process) massively dropped when app2app was used - i.e. far more users were successfully completing the process with app2app, compared to standard redirection. (This was not a surprise to most people - I'm sure I'm not alone in having no idea what the username, password or other security details are for the majority of my bank accounts.)

However, the ‘app2app’ model is not familiar to many outside the UK ecosystem, and there are few if any explanations of how it works at the protocol level - something I will attempt to address in this post.

Interestingly, relying parties have been doing 'app2web' for years in their own apps as described in IETF BCP212 - OAuth 2.0 for Native Apps - with app2app we're taking that exact same pattern that was applied to OAuth clients and instead applying it to the OAuth server.

I'm happy to explain this in more detail to anyone and to assist with decisions in this area, please drop me an email at joseph@emobix.co.uk if you'd like to know more.

How app2app works

App2app uses a standard OAuth or OpenID Connect flow. At the technical level, it looks something like this:


It makes use of the “claimed https url” feature of iOS and Android (as recommended by BCP 212 - OAuth 2.0 for Native Apps - also known as 'Deep linking', 'Universal links' on iOS or 'App Links' on Android). “Claimed urls” are a secure way for mobile apps to indicate that if a user attempts to view a url on their associated website, the app should be launched instead to provide a superior user experience. The TPP's app (acting on behalf of the OAuth client) needs to claim their registered redirect url.
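
As an aside, you can see whether a domain has actually published the association files that claimed urls rely on by fetching them directly (bank.example.com is just a placeholder here):

# iOS universal links association file
curl https://bank.example.com/.well-known/apple-app-site-association

# Android app links (digital asset links) file
curl https://bank.example.com/.well-known/assetlinks.json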

For the third parties it generally requires only a handful of new lines of code if they already support the redirect flow. Normally they would just open an in-app “browser tab”; the only change is that they now need to check first whether there is an app on the system that can open the url, and if there is, launch the app instead. On iOS using Swift this looks something like:

    // if the bank's app is present & supports app2app, open it
    UIApplication.shared.open(authorizationEndpointUrl, options: [.universalLinksOnly: true]) { (success) in
        if !success {
            // launching bank app failed: app does not support universal links or
            // bank's app is not installed - open an in-app browser tab instead
            // <...continue as app did before app2app...>
        }
    }

or in ObjC:

       [UIApplication.sharedApplication openURL: authorizationEndpointUrl options:@{UIApplicationOpenURLOptionUniversalLinksOnly: @YES} completionHandler:^(BOOL success) {
            if (!success) {
                // launching bank app failed... continue as before
            }
        }];


Note how this automatically drops back to the 'old' way if the bank's app is not installed. Equally the bank's app should be implemented so that it does not care whether the TPP is an app or a website. This means all 4 combinations (app2app, web2app, app2web, web2web) will automatically work; an app will be used if available, falling back to the web experience if there's no app.

For banks, their mobile apps need to claim the relevant authorisation endpoints (which can be harder than it sounds, as some banks have multiple brands or segments of their customer base that are currently hosted on the same Authorization Endpoint but have different mobile apps). This generally means that banks will also need multiple discovery endpoints and hence multiple ‘issuer’ values. (There are other possible approaches, but they unfortunately increase the chances of successful phishing attacks targeted at the third parties so cannot be recommended.)

The bank’s app then needs to authenticate the user and obtain an OAuth 2.0 ‘authorization code’ (and, in Financial-grade API compliant systems, the associated id_token used as a detached signature for that code). Generally this is done by securely generating a private key protected by the user’s biometric. The key is registered with the authorization server during an initial pairing process, and then used by the app to prove to the server that the same user has completed a biometric authentication - properly implemented this satisfies the SCA (Strong Customer Authentication) requirement in PSD2, whilst still being a very easy process for the user.

As app2app is still relatively new, in most cases it will involve the bank adding a (relatively simple) custom plugin to a new authorization flow in their authorization server (there is no standard protocol or API defined for use between the bank’s app and the bank’s authorization server). As there is no standard that defines the details of how this is done, there are generally two choices:

  1. Entirely native user experience: the bank's authorization server has an API that the native mobile app can use to complete the consent process and obtain the authorization code given the biometric proof
  2. Partially native user experience: the app collects the biometric, then sends the proof as an additional parameter to the authorization endpoint url, which is then presented in a web view to complete the consent process

The exact solution used is likely to be dictated by the capabilities available in the IdP software in use on the authorization server.


Once the bank’s app has the code (and maybe an id_token) it will pass them back to the third party app by appending them to the client’s preregistered redirect url as usual.

The flow then proceeds as per the normal redirection flow; the third party app passes the authorization code to its backend, which exchanges the authorization code for an access token (etc) by calling the bank authorization server’s token endpoint, and can then access the bank APIs.
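
As a rough sketch of that last step (hostnames, file names and variables here are placeholders, and I've assumed MTLS client authentication purely for illustration - FAPI also allows private_key_jwt), the token exchange is just the standard OAuth2 call:

curl https://bank.example.com/token \
  --cert tpp-transport.pem --cacert bank-ca-chain.pem \
  -d grant_type=authorization_code \
  -d code="$AUTHORIZATION_CODE" \
  -d redirect_uri="https://tpp.example.com/callback" \
  -d client_id="$CLIENT_ID"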

App2app, web2app, app2web, web2web

It's worth saying a little more about the possible combinations, as some of them require a little care - in particular 'web2app' (where a mobile website being viewed in the web browser on a mobile device can use a native app to authenticate), where the redirect back to the relying party needs to go to the same web browser the user started in (so that their session cookies are still there). It's worth testing all four scenarios and making sure you're aware of any limitations your implementation has.

How does this relate to CIBA?

Many people will be aware of the relatively new CIBA and FAPI-CIBA specifications from the OpenID Foundation. Here's a quick summary:

CIBA

  • Can authenticate across devices; authentication process can be triggered from a petrol pump, call centre, point of sale terminal and so on
  • Relatively new protocol; some vendors' IAM products do not yet support it
  • Requires more actions from the user

App2app

  • Same device only
  • Does not require any changes to your IdP product, so it can be deployed today
  • Faster/easier for user than CIBA
  • Very easy for relying parties to use if they are already using standard OAuth2/OpenID Connect flows

When doing authentication between two apps that are on the same device, I cannot think of a use case where it would be better to use CIBA.

PSD2: Why is app2app suddenly a hot topic?

The EBA recently (29th July 2019) issued a new set of clarifications on their interpretation of PSD2, containing the statement:

This means that, ASPSPs that have implemented a redirection approach and that enable their own PSUs to authenticate via the ASPSP's mobile app when the PSU directly accesses his/her account should also support app-to-app redirect when the customer uses a TPP

What this means is that all banks that implement “redirection” (i.e. the standard OAuth2 flow where the TPP sends the user's browser to the bank's login page) for authorising third parties to a user’s bank account will now need to support ‘app-to-app’ redirection flow (so long as the bank has a mobile app). This isn't the only reason you might need to implement 'app-to-app'; for example if you avoid the 'redirection' flow in favour of a decoupled flow, you will have to convince your National Competent Authority that doing a decoupled flow between apps/websites both being consumed on the same device does not present an "impediment" - which in my opinion would be a hard argument to win. If you're not sure if it's an impediment, I suggest you draw out the UX for 3 flows:

  1. The user authenticating to your ASPSP app to view transactions (or make a transaction)
  2. The user authenticating from a TPP app to your ASPSP app using app2app for the same scenarios
  3. The user authenticating from a TPP app to your ASPSP app (on the same device) using decoupled

If '3' has significantly more steps or delays for the user than '1' and '2' then you need to consider how the extra steps may negatively affect the experience of your customers when using third parties, and how you will justify those extra steps to your National Competent Authority.

For consumers, the EBA clarification is a very good thing - if you’re using a third party service on your mobile device (for example the numerous apps that let you view all your bank accounts in a single place) your life has just been made a lot easier. Now when you want to add a new bank account, the third party app will open the relevant banking app, and the banking app will authenticate you in the normal way (often FaceID/TouchID or other very easy biometric based flow), confirm the access, and redirect you back to the third party app. (There’s also a ‘web-to-app’ variant of this if you’re using a third party website on a mobile device - it is really no different to 'app-to-app', and if a bank supports app2app then web2app should 'just work'.)

The EBA opinion also shouldn’t be a surprise to anyone: PSD2 already required banks to authorise third party access using the same mechanisms the user would normally use - and for a large number of users (myself included) a mobile banking app protected by biometrics is the primary way of accessing their bank accounts. App2app is the only sensible way I have seen of allowing a user to authenticate with a biometric in a redirection based OAuth2 flow.

More on app2app


Please see my collection of app2app articles/presentations blog post for further details.


I also run training courses on app2app that go into more detail on the implementation, best practices, common patterns (and anti-patterns) and potential problems - an initial 90 minute training session plus workshops as necessary, particularly useful for banks that plan to implement app2app in their apps/authorization servers. Please drop me an email if you're interested.

Tuesday 16 July 2019

Security Conformance in the UK OpenBanking ecosystem


I've been getting a lot of questions lately about how security profile conformance testing is (and will be) working in the UK OpenBanking ecosystem, so I thought it'd be helpful to write up a bit more detail about what has and will be happening.

There are 3 certifications currently available that are relevant to the security profile. I'll expand on each one below. I wrote this blog in July 2019; as this is a fast moving space it's very possible things have changed since.

Note that this is a completely separate topic to functional conformance; ASPSPs will generally want to certify for both functional and security conformance.

Background

A reasonable amount of confusion arose because OpenBanking largely adopted the FAPI-RW specification, but in order to allow all CMA9 ASPSPs to launch a service on the 13th January 2018 a number of concessions were made in a UK OpenBanking Security Profile.

The main two areas of difference were:

  • Client authentication: Whilst FAPI requires the use of OAuth MTLS or private_key_jwt client authentication, OpenBanking's deprecated security profile chose to also allow (but not recommend) client_secret_basic and client_secret_post as interim measures - albeit still requiring that matching MTLS certificates are presented.
  • Response Type: FAPI only allows response_type=code id_token. OpenBanking's deprecated security profile decided to also allow response_type=code "as an interim measure if not yet able to support code id_token"; the expectation would be that (due to known security flaws) any support for response_type=code would be removed shortly after an ASPSP has been able to update their software to support response_type=code id_token.

There are some other differences too, but the other changes are generally more minor.

On 23 August 2018, OBIE's Technical Design Authority (TDA) agreed a decision to switch from the Open Banking Security Profile to the Financial Grade API (FAPI) Profile.

The Open Banking Security Profile is hence essentially obsolete, and only still an option due to some legacy systems.

Available Conformance Tests


1. OpenID Connect Core tests

These test basic OpenID Connect Core behaviours. They do NOT support mutual TLS of any form, and hence they cannot be run against an ASPSP production system.

ASPSPs may be able to run these tests against a development version of their system where all mutual TLS functionality has been disabled. If the version of any underlying vendor product in use already has an OpenID Connect Core certification then the value gained from running these tests again may be limited, so you should check with your vendor to see if they already have a certification.

2. OpenBanking security conformance suite


These tests should only be used if the ASPSP has not yet adopted FAPI-RW (see the 'Background' section above for more explanation about this).

As you will see written in many places, this test server and certification using these tests will not be available after 14th September 2019. These tests are already deprecated and are only still alive because a few ASPSPs have not yet managed to migrate to FAPI.

3. OpenID Foundation FAPI conformance suite

The instructions can be found here.

This suite is based on the OpenBanking Security Profile conformance suite, which OpenBanking donated to the OpenID Foundation. The suite has been updated to fully test FAPI-RW implementors draft 2 (currently the latest version of the spec, and the version of FAPI OpenBanking adopted).

This is the security conformance suite that OpenBanking recommends is used. In order to use this suite, ASPSPs must align to FAPI as explained in the 'Background' section above.

In due course it is hoped that this test will cover all appropriate parts of the OpenID Connect Core tests, removing the need to run the 'Core' conformance suite.

It's important to also remember that ASPSPs should aim to test all the applicable software. In most ASPSPs this will mean running security conformance against:
  1. All production deployments
  2. All sandboxes
  3. All iOS applications (that implement 'app2app')
  4. All Android applications (that implement 'app2app')

Conclusion

Most ASPSPs should be looking to run the OIDF FAPI conformance suite, as going forward this is the only option on the table that will allow them to demonstrate security conformance to the Financial Conduct Authority. This will require that the ASPSPs align to the FAPI standard and support one of the FAPI mandated client authentication methods.

I'm happy to explain this in more detail to anyone and to assist with decisions in this area, please drop me an email at joseph.heenan@fintechlabs.io if you'd like to know more.


Friday 5 January 2018

Obtaining keys to on-board/register a TPP client application to an OpenBanking bank API server

[Note that this was written in January 2018; some of the specifics about how you interact with the OpenBanking Directory have changed after the move to support eIDAS in mid-2019 and I suggest referring to OpenBanking's documentation for that part.]

Introduction

I've been doing a lot of work with the OpenBanking APIs recently (which go live for end-users on 13th January 2018).

There's a lot already written elsewhere about the whole concept of OpenBanking. To quickly sum up, the Competition & Markets Authority insisting that the largest 9 banks in the UK publish APIs to a common standard is a really good thing, and we've already seen that it's encouraging innovation.

Some of the development of the standards was not as open as some might like, and there's a lot of hidden information and new concepts that make it difficult to actually get going on the system for someone who hasn't been intimately involved. The OpenBanking system is fairly unusual in its heavy use of MATLS and a custom certificate authority.

One particular pain point I ran into was registering client applications into OpenBanking's directory and then onto a particular bank's authorisation server. There is a front end guide for the directory (hopefully you have this if you have access to the directory, as I'm not entirely sure where the official place to get it from is), and there is a spec for client registrations here:

https://bitbucket.org/openid/obuk/src/4630771db004da59992fb201641f5c4ff2c881f1/uk-openbanking-registration-profile.md?at=master&fileviewer=file-view-default

The spec covers what you need to do in great detail - unfortunately it contains very little detail on how you can actually achieve it using common tools!

Creating the client on the directory

Firstly you need to create your client within the OpenBanking directory. This should be fairly straightforward (but try to get everything right first time - the directory does not allow you to edit a client, say to add/correct redirect URIs; you have to create a client from scratch each time). Note that the pages are sometimes a little slow to populate, so have patience.

Creating your private keys & getting them signed

Once you've created your client on the directory,  you need to create the two private keys that OpenBanking requires (I've used unix shell style variables here; if you're on Windows you'll need to manually replace ${org} and ${client} in the commands). For the MIT (multi-industry testing) environment use:

org="my-org-id-from-open-banking-directory"
client="id-for-my-client-from-open-banking-directory

openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -out transport.csr -keyout transport.key \
  -subj "/C=GB/O=Open Banking Limited/OU=${org}/CN=${client}"

openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -out signing.csr -keyout signing.key \
  -subj "/C=GB/O=Open Banking Limited/OU=${org}/CN=${client}" 

Or if you are on the production environment, the O= needs to be 'OpenBanking':

openssl req -new -newkey rsa:2048 -sha256 -nodes \
  -out transport.csr -keyout transport.key \
  -subj "/C=GB/O=OpenBanking/OU=${org}/CN=${client}"
openssl req -new -newkey rsa:2048 -sha256 -nodes \
  -out signing.csr -keyout signing.key \
  -subj "/C=GB/O=OpenBanking/OU=${org}/CN=${client}"

Do not put any other/extra values into the DN (the '-subj' parameter).

Note the importance of -sha256 (which may be missing in the documentation from OpenBanking) - older OpenSSL installs will default to sha1 which is not acceptable. You can check if you have a sha1 or sha256 csr by running:

openssl req -in transport.csr -text -noout | grep 'Signatu'

This should output:

    Signature Algorithm: sha256WithRSAEncryption

These csr files can then be uploaded to the OpenBanking directory, which will give you two .pem files (download these alongside your private keys, renaming them to transport.pem and signing.pem) and also a Software Statement Assertion.

You will also need to obtain Open Banking's root and issuing certificate authority as .cer files. (I'm currently not sure what is the official route for obtaining these.)

You can check the contents of the files using openssl, eg:

openssl req -in transport.csr -text -noout
openssl rsa -in transport.key -text -noout
openssl x509 -in transport.pem -text -noout

Folding the OpenBanking PEM files

Some software (including openssl when generating the pkcs12 necessary for Firefox) considers the pem files provided by the OpenBanking directory to be invalid as they have very long lines; you can fix this with:

fold -w64 transport.pem > transport-fixed.pem

Using APIs in Firefox

If you want to do any testing in Firefox, in particular using its handy HTTP Requester add-on (which currently only works in pre-Quantum builds of Firefox, so you may need to download an older version), then you'll need to convert your transport key to a pfx file:

openssl pkcs12 -export -out transport.pfx -inkey transport.key -in transport-fixed.pem

This pfx file and the OpenBanking .cer files can then be imported into Firefox in Preferences -> Advanced -> Certificates -> "Your Certificates" and "Authorities" respectively.

Using APIs in curl

curl is my preferred tool for trying out API requests before coding them up. To use curl with a bank API server, you'll need to combine the two OpenBanking CA certificates together:

cat obrootca.cer obissuingca.cer  >> obchain.cer

and also combine your private transport key with the signed certificate you got back from the OpenBanking directory:

cat transport.key transport.pem >> transport-combined.pem 

These certificates/keys can then be used with curl using:

curl --cacert obchain.cer \
  --cert-type pem --cert transport-combined.pem

Note that if you make an error in supplying the client certificate or issuing chain, you may well discover that the bank's server just resets the TCP connection instead of returning a useful error - variants of this error message haunted me for longer than I would have liked:

curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

(at least on linux, errno 104 is ECONNRESET or 'Connection reset by peer')

Just double check you are supplying a valid client certificate and have supplied the correct OpenBanking .cer files and you should get past this.
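
If you want to dig a little deeper into what's happening at the TLS layer, openssl s_client can be useful (the hostname here is just a placeholder):

openssl s_client -connect api.examplebank.co.uk:443 \
  -cert transport-fixed.pem -key transport.key -CAfile obchain.cer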

Creating a JWT for a bank's dynamic client registration endpoint

You will also need to produce a signed registration request (if the bank you're onboarding with supports - or only supports - a dynamic client registration endpoint). To actually produce the signed request you can use:

https://jwt.io

You can add your signing private key in there, after selecting 'RS256' (the private key never leaves your browser, but if this is a real production key you probably want to use a completely offline tool instead).

This site is invaluable for verifying if your JWT is signed correctly:

https://jwt.davetonge.co.uk

For the 'jwks endpoint' field you should enter your jwks URL found on your OpenBanking Directory page (it can also be found in your SSA if you use jwt.io to decode it).

The main gotcha here is to make sure your SSA is current (some banks require it to be less than 3 days old) and that the iat/exp fields are current - there's a handy online converter for the time stamps.
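
If you'd rather check those epoch timestamps from the command line, the date command will do it (the GNU and macOS/BSD variants differ slightly):

date +%s                   # current time as an epoch timestamp (e.g. for iat)
date -d '+10 minutes' +%s  # an exp 10 minutes ahead (GNU date)
date -v +10M +%s           # the same on macOS/BSD date
date -d @1564617600        # convert an epoch timestamp back to a readable date (GNU)
date -r 1564617600         # the same on macOS/BSD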

Generating a JWKS for the signing key

Lastly, you may find you need to convert your signing key into JSON Web Key Set format (JWKS for short). The npm pem-jwk module should be able to do this conversion, install it using:

  sudo npm install -g pem-jwk

An extra issue is that currently the npm RSA key reading module only copes with the older PKCS1 (traditional OpenSSL) key format - whereas current versions of OpenSSL generate keys in the PKCS8 format - so you may well see a 'Could not parse PEM' or 'Could not read file' error. You can convert your key using:

openssl rsa -in signing.key -out signing-PKCS1.key

(thanks to this stack overflow post for a detailed explanation of the key formats!)
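
If you're not sure which format a particular key file is in, the first line of the PEM file tells you:

head -1 signing.key
# "-----BEGIN RSA PRIVATE KEY-----" means PKCS1 (traditional OpenSSL format)
# "-----BEGIN PRIVATE KEY-----" means PKCS8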

Then you can do the conversion to jwks:

 pem-jwk ~/path-to-my-keys/signing-PKCS1.key > signing.jwks

which will write the jwks to signing.jwks. You'll need to manually add the "kid": "<key id from OB directory>" and "alg": "RS256" lines.
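
If you have jq installed, one way to add those two fields in a single step is something like this (a sketch - the kid value is a placeholder you'd take from the directory):

pem-jwk ~/path-to-my-keys/signing-PKCS1.key \
  | jq '. + {kid: "<key id from OB directory>", alg: "RS256"}' \
  > signing.jwks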

TLS Keys as JSON strings

You may also need the TLS transport keys as JSON strings; these are simply the PEM files with the header/footer lines and all newlines removed, like so:

perl -pe '$_="" if /----/; s/\n//' transport.key > transport-key.json

perl -pe '$_="" if /----/; s/\n//' transport.pem > transport-cert.json

Wrapping up

Hopefully that should give you a good start. If you need additional help in this area, my company does consulting work - drop me an email at joseph@emobix.co.uk.