[![Stargazers repo roster for @nocodb/nocodb](https://reporoster.com/stars/nocodb/nocodb)](https://github.com/nocodb/nocodb/stargazers)
# Quick try
## NPX
You can run the command below if you need an interactive configuration.
```
npx create-nocodb-app
npm install
npm start
```
## Docker
```bash
# for SQLite
docker run -d --name nocodb \
-v "$(pwd)"/nocodb:/usr/app/data/ \
-p 8080:8080 \
nocodb/nocodb:latest
```
> If you plan to input some special characters, you may need to change the character set and collation yourself when creating the database. Please check out the examples for [MySQL Docker](https://github.com/nocodb/nocodb/issues/1340#issuecomment-1049481043).
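As a hedged sketch of what that could look like with the official `mysql` image (container name, password, and database name are placeholders; adjust to your setup):

```bash
# Illustrative only: start MySQL with a utf8mb4 character set and collation so
# special characters round-trip correctly. Credentials and names are placeholders.
docker run -d --name nocodb-mysql \
  -e MYSQL_ROOT_PASSWORD=password \
  -e MYSQL_DATABASE=nocodb \
  mysql:8 \
  --character-set-server=utf8mb4 \
  --collation-server=utf8mb4_unicode_ci
```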
- ⚡ Access Control with Roles: Fine-grained Access Control at different levels
- ⚡ and more ...
### App Store for Workflow Automations
We provide different integrations in three main categories. See <a href="https://docs.nocodb.com/setup-and-usages/app-store" target="_blank">App Store</a> for details.
- ⚡ Chat: Slack, Discord, Mattermost, etc.
- ⚡ Email: AWS SES, SMTP, MailerSend, etc.
- ⚡ Storage: AWS S3, Google Cloud Storage, Minio, etc.
### Programmatic Access
We provide the following ways to let users programmatically invoke actions. You can use a token (either JWT or Social Auth) to sign your requests for authorization to NocoDB, as in the sketch after this list.
- ⚡ REST APIs
- ⚡ NocoDB SDK
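As a hedged illustration, a request signed with an API token might look like the following. The host, project, and table names are placeholders, and the exact endpoint path depends on your NocoDB version; see the REST API docs.

```bash
# Illustrative sketch: list rows from a table using an API token.
# Replace the host, project, and table names with your own.
curl -H "xc-token: YOUR_API_TOKEN" \
  "http://localhost:8080/api/v1/db/data/v1/my_project/my_table"
```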
### Sync Schema
We allow you to sync schema changes if you have made changes outside the NocoDB GUI. However, note that you will have to bring your own schema migrations when moving from one environment to another, as in the sketch below. See <a href="https://docs.nocodb.com/setup-and-usages/sync-schema/" target="_blank">Sync Schema</a> for details.
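For example, a hypothetical migration applied directly to the underlying database (host, user, table, and column names are made up), after which Sync Schema can pick up the change:

```bash
# Illustrative only: alter the external database yourself, then use Sync Schema
# in the NocoDB GUI to reflect the change. All names here are placeholders.
mysql -h your-db-host -u your-user -p your_database \
  -e "ALTER TABLE customers ADD COLUMN loyalty_points INT DEFAULT 0;"
```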
### Audit
We keep all user operation logs in one place. See <a href="https://docs.nocodb.com/setup-and-usages/audit" target="_blank">Audit</a> for details.
# Production Setup
By default, SQLite is used for storing metadata. However, you can specify your own database. The connection parameters for this database can be specified in the `NC_DB` environment variable. We also provide the environment variables below for configuration.
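As a sketch, assuming a Postgres metadata store (host, credentials, and database name are placeholders; see the environment-variable docs for the exact connection-string format):

```bash
# Illustrative only: point NocoDB's metadata store at Postgres via NC_DB.
docker run -d --name nocodb \
  -e NC_DB="pg://host.docker.internal:5432?u=postgres&p=password&d=nocodb_meta" \
  -p 8080:8080 \
  nocodb/nocodb:latest
```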
## Environment variables
Please refer to [Environment variables](https://docs.nocodb.com/getting-started/environment-variables).
# Development Setup
Please refer to [Development Setup](https://docs.nocodb.com/engineering/development-setup)
# Contributing
Please refer to [Contribution Guide](https://github.com/nocodb/nocodb/blob/master/.github/CONTRIBUTING.md).
# Why are we building this?
Most internet businesses equip themselves with either a spreadsheet or a database to solve their business needs. Spreadsheets are used collaboratively by over a billion humans every single day. However, we are nowhere near working at similar speeds on databases, which are far more powerful tools when it comes to computing. Attempts to solve this with SaaS offerings have meant horrible access controls, vendor lock-in, data lock-in, abrupt price changes and, most importantly, a glass ceiling on what's possible in the future.
# Our Mission
Our mission is to provide the most powerful no-code interface for databases, as open source, to every single internet business in the world. This would not only democratise access to a powerful computing tool but also bring forth a billion-plus people who will have radical tinkering-and-building abilities on the internet.
# License
<p>
This project is licensed under <a href="./LICENSE">AGPLv3</a>.
</p>
Thank you for your contributions! We appreciate all the contributions from the community.
| Variable | Comments | If absent |
|----------|----------|-----------|
| NC_DISABLE_CACHE | To be used only while debugging. On setting this to `true`, metadata is fetched from the DB instead of Redis/cache. | `false` |
| NC_BASEURL_INTERNAL | Used as the base URL for internal (server) API calls | Default value in Docker will be `http://localhost:$PORT`; in all other cases it is populated from the request object |
| AWS_ACCESS_KEY_ID | For Litestream - S3 access key ID | If Litestream is configured and `NC_DB` is not present, SQLite gets backed up to S3 |
| AWS_SECRET_ACCESS_KEY | For Litestream - S3 secret access key | If Litestream is configured and `NC_DB` is not present, SQLite gets backed up to S3 |
| AWS_BUCKET | For Litestream - S3 bucket | If Litestream is configured and `NC_DB` is not present, SQLite gets backed up to S3 |
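As a hedged sketch of how the Litestream-related variables above could be passed to the Docker image (bucket name and credentials are placeholders; this path only applies when `NC_DB` is not set):

```bash
# Illustrative only: back up the embedded SQLite metadata store to S3 via Litestream.
docker run -d --name nocodb \
  -e AWS_ACCESS_KEY_ID="your-access-key-id" \
  -e AWS_SECRET_ACCESS_KEY="your-secret-access-key" \
  -e AWS_BUCKET="your-backup-bucket" \
  -p 8080:8080 \
  nocodb/nocodb:latest
```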
// TODO fix record mapping (this causes every record to map the first option;
// we can't handle them using the data API as it doesn't provide the option id
// within the data - we might instead get the correct mapping from the schema file)
let dupNo = 1;
const defaultName = (value as any).name;
while (
continue;
}
// populate cdf (column default value) if configured