Technical and organisational security measures

Measures of pseudonymisation and encryption of personal data

The Brainframe services do not apply additional pseudonymisation to personal data and rely on the encryption principles described below.

All data is encrypted at rest 

  • Our database servers use the highly redundant AWS EFS file system as "local storage", which ensures encryption in transit and at rest using the industry-standard Advanced Encryption Standard in Galois/Counter Mode with 256-bit keys (AES-256-GCM). Data encrypted under AES-256-GCM is protected now and for the foreseeable future; cryptographers generally consider this algorithm to be quantum resistant. A minimal encryption sketch follows this list.
  • For backups we use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), where each object is encrypted with a unique key. As an additional safeguard, S3 encrypts each key with a root key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available, the 256-bit Advanced Encryption Standard (AES-256).
  • Authentication data in AWS Cognito is encrypted at rest and in transit 
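For illustration only, here is a minimal sketch of AES-256-GCM encryption and authenticated decryption using the Python cryptography package. The key handling shown is purely didactic; in the Brainframe services, key management is performed transparently by AWS (EFS and S3), not by application code.

```python
# Minimal AES-256-GCM sketch using the "cryptography" package.
# Illustrative only: real key management is handled by AWS, not in application code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key, as in AES-256-GCM
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # 96-bit nonce, unique per message
plaintext = b"personal data at rest"
associated_data = b"workspace-id"          # authenticated, but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# GCM authenticates as well as encrypts: decryption raises InvalidTag
# if the ciphertext or associated data was tampered with.
assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext
```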

Measures for ensuring ongoing confidentiality, integrity, availability and resilience of processing systems and services

  • To ensure confidentiality of data hosted on our service, we use AWS Cognito for authentication and Brainframe namespace authorization. This means that only authenticated users who have been granted access to a workspace are able to communicate with our APIs, using signed JWTs to access the authorized workspaces (a token-verification sketch follows this list).
  • At the workspace level, the workspace administrator can also configure more granular permissions for specific parts of the namespace (e.g. only specific folders).
  • Brainframe users can optionally enable two-factor authentication using a 6-digit one-time password (e.g. Google Authenticator) for an even higher level of assurance.
  • To ensure integrity of your data, we use industry best-practice protocols so that data in transit cannot be changed. All communication with our databases is executed in a way that automatically corrects itself if tampering in transit is detected.
  • Our database engine uses ACID (Atomicity, Consistency, Isolation and Durability) operations at row level to ensure integrity.
  • To ensure availability of our data storage, we use the highly redundant AWS EFS file system, which automatically stores the data in multiple availability zones at the same time, ensuring that an outage of one complete availability zone will not impact the storage. This data uses underlying AWS S3 storage objects, which offer 99.999999999% (11 9’s) durability over a given year.
  • We also use the highly scalable AWS Lambda infrastructure for highly resilient processing; it automatically scales across multiple availability zones, ensuring that a complete outage in one availability zone does not impact our service.
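As an illustration of the JWT-based API access described above, the sketch below verifies a Cognito-issued token in Python with PyJWT. The region, user pool ID and app client ID are placeholders, not Brainframe's actual configuration.

```python
# Sketch: verifying a signed Cognito JWT before serving an API request.
# Region, pool and client IDs below are placeholders (PyJWT >= 2.x).
import jwt
from jwt import PyJWKClient

REGION = "eu-central-1"                   # placeholder
USER_POOL_ID = "eu-central-1_EXAMPLE"     # placeholder
APP_CLIENT_ID = "example-app-client-id"   # placeholder

ISSUER = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def verify_token(token: str) -> dict:
    """Return the verified claims, or raise jwt.InvalidTokenError."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],             # Cognito signs tokens with RS256
        audience=APP_CLIENT_ID,
        issuer=ISSUER,
    )
```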

Measures for ensuring the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident

  • In addition to the high availability of our data described in the previous topic, we take hourly full backups of each workspace, allowing us to do a complete restore of a specific workspace's data for a given date/hour should this ever be required.
  • All our cloud infrastructure is defined as "Infrastructure as Code", allowing us to fully replicate the system should this ever be required.
  • Our application servers that do not run on AWS Lambda use security-hardened, Dockerized images that ensure consistency, scalability and quick restorability of our services.

Processes for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures in order to ensure the security of the processing

  • The full restore of workspace database backups is tested on a regular basis to ensure they are available and functional for restore in case of disaster recovery.
  • We have automatic detection of failed backup jobs in place to notify the infrastructure team so they can take action (a monitoring sketch follows this list).
  • Our applications are constantly monitored, allowing the infrastructure teams to react when service performance is impacted.
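The failed-backup detection mentioned above could, for example, look like the sketch below, which checks that a recent workspace backup object exists in S3. The bucket name, prefix and notify() hook are hypothetical placeholders.

```python
# Sketch: alert if the most recent workspace backup in S3 is older than expected.
# Bucket name, prefix and notify() are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
import boto3

BACKUP_BUCKET = "example-workspace-backups"  # placeholder
MAX_AGE = timedelta(hours=2)                 # hourly backups, plus some slack

def notify(message: str) -> None:
    print(message)  # stand-in for paging the infrastructure team

def check_backup_freshness(workspace_prefix: str) -> None:
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BACKUP_BUCKET, Prefix=workspace_prefix)
    newest = max((obj["LastModified"] for obj in resp.get("Contents", [])), default=None)
    if newest is None or datetime.now(timezone.utc) - newest > MAX_AGE:
        notify(f"Backup missing or stale for {workspace_prefix}")
```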

Measures for user authentication and authorization

  • We use AWS Cognito for authentication and Brainframe namespace authorization. This means that only authenticated users who have been granted access to a workspace are able to communicate with our APIs, using signed JWTs to access the authorized workspaces.
  • Your passwords are stored using a one-way hash, which allows the same user to authenticate without us storing the actual credentials.
  • Brainframe staff does not have access to your password and cannot retrieve it for you. The only option if you lose it is to reset it.
  • Brainframe users can optionally enable two-factor authentication (2FA) using 6-digit time-based one-time passwords (TOTP, e.g. Google Authenticator) for an even higher level of assurance (a verification sketch follows this list).
  • In addition, at the workspace level the administrator can also configure granular permissions for specific parts of the namespace (e.g. only specific folders).
  • For all approval workflows we require our End Users to provide their 2FA code.
  • External contacts (e.g. used in Brainframe distributions, forms, ...) are authenticated via unique, time-limited tokens that are sent to the contact's email address. These users do not require username and password logins. This choice facilitates the use of the platform by infrequent external users, and we consider access to the mailbox to have its own user identification and authorization in place, giving sufficient certainty about the identity behind the email address.
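For illustration, the TOTP verification step described above can be sketched with the pyotp package. The secret here is generated on the fly for the example; in practice it is provisioned once per user (typically via a QR code) and stored server-side.

```python
# Sketch: verifying a 6-digit TOTP code as produced by authenticator apps.
# The secret is generated here for demonstration purposes only.
import pyotp

secret = pyotp.random_base32()  # shared secret, shown to the user as a QR code
totp = pyotp.TOTP(secret)       # 6 digits, 30-second time step by default

code = totp.now()               # what the authenticator app currently displays
assert totp.verify(code)        # server-side check at login

# valid_window=1 also accepts the immediately adjacent 30s codes,
# tolerating small clock drift between the phone and the server.
assert totp.verify(code, valid_window=1)
```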

Measures for the protection of data during transmission

  • All communications with our cloud services use the TLSv1.2_2019 policy, which sets the minimum negotiated Transport Layer Security (TLS) version to 1.2 and supports only the following ciphers to encrypt the data in transit (a client-side sketch follows this list):
    • TLS_AES_128_GCM_SHA256
    • TLS_AES_256_GCM_SHA384
    • TLS_CHACHA20_POLY1305_SHA256
    • ECDHE-RSA-AES128-GCM-SHA256
    • ECDHE-RSA-AES128-SHA256
    • ECDHE-RSA-AES256-GCM-SHA384
    • ECDHE-RSA-CHACHA20-POLY1305
    • ECDHE-RSA-AES256-SHA384
  • All internal data communications with our servers are also protected with state-of-the-art encryption (SSH).
  • Our servers are kept under a strict security watch and are always patched against the latest SSL vulnerabilities, enjoying "Grade A" SSL ratings at all times (www.brainframe.com and my.brainframe.com).
  • All our SSL certificates use a robust 2048-bit modulus with full SHA-2 certificate chains.
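On the client side, the effect of this policy can be sketched with Python's standard ssl module, which lets a connecting client refuse anything below TLS 1.2 and report the negotiated version and cipher suite.

```python
# Sketch: a client-side context mirroring the server policy above,
# refusing anything below TLS 1.2.
import socket
import ssl

ctx = ssl.create_default_context()            # secure defaults + certificate validation
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1

with socket.create_connection(("my.brainframe.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="my.brainframe.com") as tls:
        print(tls.version(), tls.cipher())    # negotiated TLS version and cipher suite
```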

Measures for the protection of data during storage

  • Our database servers use AWS EFS encrypted storage, which automatically stores the data in multiple availability zones at the same time, ensuring that an outage of one complete availability zone will not impact the storage. This data uses underlying AWS S3 storage objects, which offer 99.999999999% (11 9’s) durability over a given year (an encryption spot-check sketch follows this list).
  • Customer data is stored in a dedicated database; no data is shared between clients.
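For illustration, server-side encryption of stored objects can be spot-checked through the S3 API. The bucket and key names below are placeholders.

```python
# Sketch: confirm that a stored object is server-side encrypted (SSE-S3).
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="example-backup-bucket", Key="workspace/db-dump.sql.gz")

# SSE-S3 reports "AES256" here; "aws:kms" would indicate SSE-KMS instead.
assert head.get("ServerSideEncryption") == "AES256"
```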

Measures for ensuring physical security of locations at which personal data are processed

  • We only use AWS Cloud services for storing our data, which provide a high level of physical security and access control to ensure only authorized staff is allowed to physically access the data centers. 
  • The physical security of our cloud provider is regularly evaluated as part of their multiple certifications (ISO27001, ISO27017, ISO27018, ISO27701, SOC 1, SOC 2, SOC 3, CSA, PCI DSS Level 1; see https://aws.amazon.com/compliance/programs/).

Measures for ensuring events logging

  • We collect access logs for our website and our API usage to be able to conduct security investigations and detect application performance issues.
  • Every document change inside the Brainframe service generates an audit trail and allows the user to see previous versions of the data (a minimal audit-log sketch follows this list).
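Conceptually, such an audit trail can be pictured as an append-only log that keeps every version of a document. The record layout below is illustrative only, not Brainframe's actual schema.

```python
# Sketch: an append-only audit trail keeping every version of a document.
# The record layout is illustrative, not Brainframe's actual schema.
import json
from datetime import datetime, timezone

def append_audit_record(log_path: str, user: str, document_id: str, new_content: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "document_id": document_id,
        "content": new_content,  # keeping full content lets users view past versions
    }
    with open(log_path, "a", encoding="utf-8") as log:  # append-only: never overwritten
        log.write(json.dumps(record) + "\n")
```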

Measures for ensuring system configuration, including default configuration

  • We apply best practice security and system configurations as part of our change management to protect our services
  • All our cloud infrastructure is defined as "Infrastructure as Code", allowing us to fully replicate the system should this ever be required (a small sketch follows this list).
  • Our source code, which includes the infrastructure as code, is under strict access control, and only senior team leads are allowed to deploy changes to production. This gives us full traceability of changes to our application source code and infrastructure, and of the quality of the data processing behind them.
  • Our application servers that do not run on AWS Lambda use security-hardened, Dockerized images that ensure consistency, scalability and quick restorability of our services. This means that each new deployment (multiple per week) fully replaces the operating system and configuration of our production systems with the ones defined in our source code repository, giving us high assurance of configuration consistency.
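To give a flavour of what "Infrastructure as Code" means in practice, the sketch below uses the AWS CDK for Python (CDK v2) to declare an encrypted, versioned S3 bucket. The stack and bucket names are illustrative and do not describe Brainframe's actual stacks.

```python
# Sketch: "Infrastructure as Code" with the AWS CDK (Python, CDK v2).
# Stack and bucket names are illustrative only.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ExampleStorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str) -> None:
        super().__init__(scope, construct_id)
        # Encryption and versioning live in source control, so every
        # deployment reproduces exactly the same configuration.
        s3.Bucket(
            self,
            "ExampleBackups",
            encryption=s3.BucketEncryption.S3_MANAGED,  # SSE-S3, as described above
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,        # keep data on stack teardown
        )

app = App()
ExampleStorageStack(app, "ExampleStorageStack")
app.synth()
```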

Measures for internal IT and IT security governance and management

  • We use our own Brainframe Service for ISMS and GRC documentation
  • We have a continuous integration and deployment pipeline that includes automated testing of our application's key functionalities to ensure a high level of quality of our services
  • Brainframe staff will never access your data unless this has explicitly been requested via a traceable ticket/email, and then this permission is limited to the duration required to process the ticket.

Measures for certification/assurance of processes and products

  • We plan to get ISO27001 certified, but already implement an internal ISMS aligned with the requirements of the ISO27001 standard

Measures for ensuring data minimization

  • Access logs are automatically removed after 30 days
  • Other than standard End User information (name, email), we do not actively ask for more personal data; however, End Users can potentially put any personal data into our systems, which is outside of our data minimization control.

Measures for ensuring data quality

  • We have a continuous integration and deployment pipeline that includes automated testing of our application's key functionalities to ensure a high level of quality of our services
  • Our database engine uses ACID (Atomicity, Consistency, Isolation and Durability) operations at row level to ensure integrity.
  • Our source code, which includes the infrastructure as code, is under strict access control, and only senior team leads are allowed to deploy changes to production. This gives us full traceability of changes to our application source code and infrastructure, and of the quality of the data processing behind them.

Measures for ensuring limited data retention

  • All data is stored for the duration of the contract and is fully removed at the latest one month after the end of the contract.
  • Access logs for our website and API usage are automatically removed after 30 days to limit data storage (a lifecycle-rule sketch follows this list).
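A 30-day expiry like the one above can be implemented, for example, with an S3 lifecycle rule. The bucket name and prefix below are placeholders.

```python
# Sketch: expire access logs automatically after 30 days via an S3 lifecycle rule.
# Bucket name and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-access-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-access-logs-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": "access-logs/"},
                "Expiration": {"Days": 30},  # S3 deletes matching objects after 30 days
            }
        ]
    },
)
```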

Measures for ensuring accountability

  • All employees and consultants work under NDA
  • Our source code, which includes the infrastructure as code, is under strict access control, and only senior team leads are allowed to deploy changes to production. This gives us full traceability of changes to our application source code and infrastructure, and of the quality of the data processing behind them.

Measures for allowing data portability and ensuring erasure

  • At the end of the contract the Controller can request an export of its data in the form of RDF triples (an export sketch follows this list).
  • All data is stored for the duration of the contract and is fully removed at the latest one month after the end of the contract.
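For illustration, an RDF triple export can be produced with the rdflib package. The namespace and triples below are invented for the example and do not reflect Brainframe's actual data model.

```python
# Sketch: exporting data as RDF triples with rdflib.
# The namespace and triples are invented for the example.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("https://example.org/brainframe/")  # placeholder namespace

g = Graph()
doc = URIRef(EX["document/42"])
g.add((doc, EX.title, Literal("Risk assessment")))
g.add((doc, EX.workspace, Literal("example-workspace")))

# Serialize to Turtle; N-Triples ("nt") or RDF/XML ("xml") work the same way.
print(g.serialize(format="turtle"))
```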

Credit card safety

  • We never store credit card information on our own systems. 
  • Your credit card information is always transmitted securely, directly between you and our PCI-compliant payment acquirers.

Network defence

  • Our cloud provider has very large network capacities, and has designed their infrastructure to withstand the largest Distributed Denial of Service (DDoS) attacks. Their automatic and manual mitigation systems can detect and divert attack traffic at the edge of their multi-continental networks, before it gets the chance to disrupt service availability.
  • Firewalls and intrusion prevention systems on Brainframe's servers help detect and block threats

Responsible disclosure

  • We maintain a public responsible disclosure policy that rewards security researchers for confidentially reporting vulnerabilities. This approach allows us to address potential issues swiftly and effectively, safeguarding our systems and users before they can be exploited by malicious actors.


Subcontractors

 

Amazon Web Services, Inc

Subjects – End Users, Website visitors

Type of data – Full Name, Email, and any personal data uploaded/created by the customer, IP addresses (not personally identifiable by Amazon without access to other sources of data linked to IP, e.g. ISP PII data)

Purpose – To provide the application infrastructure of the Brainframe Service

Duration – For the duration of the contract

Location – Data is only stored in the EU (eu-central-1 and eu-west-1); AWS adheres to the CISPE Data Protection Code of Conduct and is also certified under the EU-US Data Privacy Framework (DPF)

Security – ISO27001, ISO27017, ISO27018, ISO27701, SOC 1, SOC 2, SOC 3, CSA, PCI DSS Level 1 (https://aws.amazon.com/compliance/programs/)


Cloudflare, Inc

Subjects – Prospects, Customers, End Users

Type of data – IP addresses (not personally identifiable by Cloudflare without access to other sources of data linked to IP, e.g. ISP PII data)

Purpose – DNS and Security monitoring solution to provide a stable and secure service

Duration – For the duration of the contract

Location – Data is only stored in the EU (using the Cloudflare Data Localization Suite, ensuring processing in ISO27001-certified EU data centers only); Cloudflare is also certified under the EU-US Data Privacy Framework (DPF)

Security – ISO27001, ISO27701, ISO27018, SOC2, BSI Qualification (https://www.cloudflare.com/en-gb/trust-hub/compliance-resources/)


Datadog, Inc

Subjects – End Users, Website visitors

Type of data – IP addresses (not personally identifiable by Datadog without access to other sources of data linked to IP, e.g. ISP PII data)

Purpose – Application, security and infrastructure monitoring solution required to provide a stable and secure service

Duration – Personal data is stored for a maximum of 2 weeks and then automatically removed

Location – Data is only stored in the EU; Datadog is also certified under the EU-US Data Privacy Framework (DPF)

Security – ISO27001, SOC 2 (https://www.datadoghq.com/security/?tab=compliance)


Odoo, SA

Subjects – Prospects, Customers, End Users

Type of data – Full Name, Phone, Email, Bank details

Purpose – ERP to manage Support, Product communications, Marketing, CRM, sales and payment data of the Controller

Duration – For the duration of the contract

Location – Data is only stored in the EU

Security – CSA STAR Level 1 (https://www.odoo.com/security)


Calendly, LLC

Subjects – Prospects, Customers, End Users

Type of data – Full name, Phone, Email

Purpose – Facilitate finding a free slot in the calendar to set up an appointment

Duration – For the duration of the contract

Location – Data is stored in the US, but Calendly is certified under the EU-US Data Privacy Framework (DPF)

Security – ISO27001, SOC 2, PCI (https://calendly.com/security)



Changelog

13/07/2022

  • Page publication

16/11/2023

  • Additional details related to privacy assurances in line with the Data Privacy Framework, and clarification that IP addresses can only be considered PII by the subcontractors if they combine them with PII from other sources (e.g. ISP data), which is typically not possible
  • Update to ambition for ISO27001 certification
  • Changed access log retention date from 7 days to 30 days to follow industry standards

11/11/2024