Using Databases
This guide provides instructions on how to use databases with CAP applications. Out-of-the-box support is provided for SAP HANA, SQLite, H2 (Java only), and PostgreSQL.
This guide is available for Node.js and Java.
Use the toggle in the title bar or press v to switch.
Setup & Configuration
Adding Database Packages
The following cds-plugin packages for the CAP Node.js runtime support the respective databases:
Database | Package | Remarks |
---|---|---|
SAP HANA Cloud | @cap-js/hana | recommended for production |
SQLite | @cap-js/sqlite | recommended for development |
PostgreSQL | @cap-js/postgres | maintained by community + CAP team |
Follow the links above to find specific information for each.
In general, all you need to do is to install one of the database packages, as follows:
Using SQLite for development:
npm add @cap-js/sqlite -D
Using SAP HANA for production:
npm add @cap-js/hana
Prefer cds add hana, which also does the equivalent of npm add @cap-js/hana, but in addition takes care of updating mta.yaml and other deployment resources, as documented in the deployment guide.
Auto-Wired Configuration
The aforementioned packages use cds-plugin techniques to automatically configure the primary database with cds.env. For example, if you added SQLite and SAP HANA, this effectively results in the following auto-wired configuration:
{"cds":{
"requires": {
"db": {
"[development]": { "kind": "sqlite", "impl": "@cap-js/sqlite", "credentials": { "url": "memory" } },
"[production]": { "kind": "hana", "impl": "@cap-js/hana", "deploy-format": "hdbtable" }
}
}
}}
In contrast to pre-CDS 7 setups this means...
Custom Configuration
The auto-wired configuration uses configuration presets, which are automatically enabled via cds-plugin
techniques. You can always use the basic configuration and override individual properties to create a different setup:
1. Install a database driver package, for example:

   npm add @cap-js/sqlite

   Add option -D if you want this for development only.

2. Configure the primary database as a required service through cds.requires.db, for example:

   {"cds":{ "requires": { "db": { "kind": "sqlite", "impl": "@cap-js/sqlite", "credentials": { "url": "db.sqlite" } } } }}
The config options are as follows:
- kind — a name of a preset, like sql, sqlite, postgres, or hana
- impl — the module name of a CAP database service implementation
- credentials — an object with db-specific configurations, most commonly url
Don't configure credentials
Credentials like username and password should not be added here but provided through service bindings, for example, via cds bind.
Use cds env to inspect the effective configuration
For example, running this command:
cds env cds.requires.db
→ prints:
{
kind: 'sqlite',
impl: '@cap-js/sqlite',
credentials: { url: 'db.sqlite' }
}
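The same effective configuration is also accessible programmatically via cds.env, for example in custom code or tests. A minimal sketch:

```js
// Inspect the effective db configuration from code,
// equivalent to running `cds env cds.requires.db` on the command line
const cds = require('@sap/cds')
console.log(cds.env.requires.db) //> { kind: 'sqlite', impl: '@cap-js/sqlite', ... }
```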
Built-in Database Support
CAP Java has built-in support for various SQL databases via JDBC. This section describes these databases and their differences with respect to CAP features. SAP HANA is supported out of the box, as are H2 and SQLite. However, H2 and SQLite aren't enterprise-grade databases and are recommended for non-productive use only, such as local development or CI tests. PostgreSQL is supported in addition, but has various limitations compared to SAP HANA, most notably in the area of schema evolution.
Database support is enabled by adding a Maven dependency to the JDBC driver, as shown in the following table:
Database | JDBC Driver | Remarks |
---|---|---|
SAP HANA Cloud | com.sap.cloud.db.jdbc:ngdbc | Recommended for productive use |
H2 | com.h2database:h2 | Recommended for development and CI |
SQLite | org.xerial:sqlite-jdbc | Supported for development and CI; recommended for local MTX |
PostgreSQL | org.postgresql:postgresql | Supported for productive use |
Learn more about supported databases in CAP Java and their configuration
Providing Initial Data
You can use CSV files to fill your database with initial data - see Location of CSV Files.
For example, in our cap/samples/bookshop application, we do so for Books, Authors, and Genres as follows:
bookshop/
├─ db/
│ ├─ data/ # place your .csv files here
│ │ ├─ sap.capire.bookshop-Authors.csv
│ │ ├─ sap.capire.bookshop-Books.csv
│ │ ├─ sap.capire.bookshop-Books.texts.csv
│ │ └─ sap.capire.bookshop-Genres.csv
│ └─ schema.cds
└─ ...
For example, in our CAP Samples for Java application, we do so for some entities such as Books, Authors, and Genres as follows:
db/
├─ data/ # place your .csv files here
│ ├─ my.bookshop-Authors.csv
│ ├─ my.bookshop-Books.csv
│ ├─ my.bookshop-Books.texts.csv
│ ├─ my.bookshop-Genres.csv
│ └─ ...
└─ index.cds
The filenames are expected to match fully qualified names of the respective entity definitions in your CDS models, optionally using a dash - instead of a dot . for cosmetic reasons.
Using .csv Files
The content of these files is standard CSV content with the column titles corresponding to declared element names, like for Books:
ID,title,author_ID,stock
201,Wuthering Heights,101,12
207,Jane Eyre,107,11
251,The Raven,150,333
252,Eleonora,150,555
271,Catweazle,170,22
Note: author_ID is the generated foreign key for the managed Association author → learn more about that in the Generating SQL DDL section.
If your content contains ...
- commas or line breaks → enclose it in double quotes "..."
- double quotes → escape them with doubled double quotes ""..."
ID,title,descr
252,Eleonora,"""Eleonora"" is a short story by Edgar Allan Poe, first published in 1842 in Philadelphia in the literary annual The Gift."
DANGER
On SAP HANA, only use CSV files for configuration data that can't be changed by application users. → See CSV data gets overridden in the SAP HANA guide for details.
Use cds add data
Run this to generate an initial set of empty .csv files with header lines based on your CDS model:
cds add data
Location of CSV Files
CSV files can be found in the folders db/data and test/data, as well as in any data folder next to your CDS model files. When you use cds watch or cds deploy, CSV files are loaded by default from test/data. However, when preparing for production deployments using cds build, CSV files from test/data are not loaded.
Adding initial data next to your data model
The content of these 'co-located' .cds files actually doesn't matter, but they need to be included in your data model, through a using clause in another file, for example.
If you need to use certain CSV files exclusively for your production deployments, but not for tests, you can achieve this by including them in a separate data folder, for example, db/hana/data. Create an index.cds file in the hana folder as outlined earlier. Then, set up this model location in a dummy cds service, for example hanaDataSrv, using the [production] profile.
"cds": {
"requires": {
"[production]": {
"hanaDataSrv ": { "model": "hana" }
}
}
}
As a consequence, when you run cds build --production, the model folder hana is added, but it's not added when you run cds deploy or cds watch because the development profile is used by default. You can verify this by checking the cds build logs for the hana build task. Of course, this mechanism can also be used for PostgreSQL database deployments.
On SAP HANA ...
CSV and hdbtabledata files found in the src folder of your database module are treated as native SAP HANA artifacts and deployed as they are. This approach offers the advantage of customizing the hdbtabledata files if needed, such as adding a custom include_filter
setting to mix initial and customer data in one table. However, the downside is that you must redundantly maintain them to keep them in sync with your CSV files.
Quite frequently you need to distinguish between sample data and real initial data. CAP supports this by allowing you to provide initial data in two places:
Location | Deployed... | Purpose |
---|---|---|
db/data | always | initial data for configurations, code lists, and similar |
test/data | if not in production | sample data for tests and demos |
Use the properties cds.dataSource.csv.* to configure the location of the CSV files. You can configure different sets of CSV files in different Spring profiles. This configuration reads CSV data from test/data if the profile test is active:
---
spring:
config.activate.on-profile: test
cds:
dataSource.csv.paths:
- test/data/**
Querying at Runtime
Most queries to databases are constructed and executed from generic event handlers of CRUD requests, so quite frequently there's nothing to do. The following is for the remaining cases where you have to provide custom logic, and as part of it execute database queries.
DB-Agnostic Queries
At runtime, we usually construct and execute queries using cds.ql APIs in a database-agnostic way. For example, queries like this are supported for all databases:
SELECT.from (Authors, a => {
a.ID, a.name, a.books (b => {
b.ID, b.title
})
})
.where ({name:{like:'A%'}})
.orderBy ('name')
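For illustration, here's a minimal sketch of serving such a query from a custom Node.js handler. The listAuthors function action is hypothetical; Authors is assumed to come from the cap/samples/bookshop model:

```js
const cds = require('@sap/cds')

module.exports = class CatalogService extends cds.ApplicationService {
  init() {
    const { Authors } = cds.entities('sap.capire.bookshop')
    // 'listAuthors' is a hypothetical function action, used for illustration only
    this.on('listAuthors', () =>
      SELECT.from(Authors, a => { a.ID, a.name })
        .where({ name: { like: 'A%' } })
        .orderBy('name')
    )
    return super.init()
  }
}
```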
At runtime, we usually construct queries using the CQL Query Builder API in a database-agnostic way. For example, queries like this are supported for all databases:
Select.from(AUTHOR)
  .columns(a -> a.id(), a -> a.name(),
           a -> a.books().expand(b -> b.id(), b -> b.title()))
  .where(a -> a.name().startsWith("A"))
  .orderBy(a -> a.name());
Standard Operators
The database services guarantee identical behavior of these operators:
- ==, = — with = null being translated to is null
- !=, <> — with != translated to IS NOT in SQLite, or to IS DISTINCT FROM in standard SQL, or to an equivalent polyfill in SAP HANA
- <, >, <=, >=, IN, LIKE — are supported as is in standard SQL
In particular, the translation of != to IS NOT in SQLite, or to IS DISTINCT FROM in standard SQL, or to an equivalent polyfill in SAP HANA, greatly improves the portability of your code.
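For example, a null-safe comparison written once runs unchanged on all supported databases. A small sketch, assuming the Books entity with a descr element from cap/samples/bookshop:

```js
// != is translated to IS NOT on SQLite, IS DISTINCT FROM in standard SQL,
// or an equivalent polyfill on SAP HANA
const described = await SELECT.from(Books).where({ descr: { '!=': null } })
```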
Runtime Only
The operator mappings are available for runtime queries only, but not in CDS files.
Function Mappings for Runtime Queries
A specified set of standard functions is supported in a database-agnostic, hence portable way, and translated to database-specific variants or polyfills. Note that these functions are only supported within runtime queries, not in CDS files. This set of functions is by and large the same as specified in OData:
- concat(x,y,...) — concatenates the given strings or numbers
- trim(x) — removes leading and trailing whitespace
- contains(x,y) — checks whether y is contained in x, may be fuzzy
- startswith(x,y) — checks whether y starts with x
- endswith(x,y) — checks whether y ends with x
- matchespattern(x,y) — checks whether x matches regex y
- substring(x,i,n?) ¹ — extracts a substring from x starting at i (may be negative) with length n (optional; may be negative)
- indexof(x,y) ¹ — returns the index of the first occurrence of y in x
- length(x) — returns the length of string x
- tolower(x) — returns all-lowercased x
- toupper(x) — returns all-uppercased x
- ceiling(x) — rounds the input numeric parameter up to the nearest numeric value
- floor(x) — rounds the input numeric parameter down to the nearest numeric value
- round(x) — rounds the input numeric parameter to the nearest numeric value. The mid-point between two integers is rounded away from zero, i.e. 0.5 is rounded to 1 and -0.5 is rounded to -1.
- year(x), month(x), day(x), hour(x), minute(x), second(x), fractionalseconds(x), time(x), date(x) — return parts of a datetime for a given cds.DateTime / cds.Date / cds.Time
- maxdatetime(x), mindatetime(x) — return the maximum or minimum datetime for a given cds.DateTime / cds.Date / cds.Time
- totalseconds(x) — returns the total seconds of a datetime for a given cds.DateTime / cds.Time
- now() — returns the current datetime
- min(x), max(x), sum(x), avg(x), count(x), countdistinct(x) — aggregate functions
- search(xs,y) — checks whether y is contained in any of xs, may be fuzzy → see Searching Data
- session_context(v) — with standard variable names → see Session Variables

¹ These functions work zero-based. E.g., substring('abcdef', 1, 3) returns 'bcd'.
You have to write these functions exactly as given; all-uppercase usages aren't supported.
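As an illustration, a sketch of using some of these functions in a runtime query, assuming the Books entity from cap/samples/bookshop:

```js
// contains() and tolower() are translated to database-specific variants or polyfills
const hits = await SELECT.from(Books).where`contains(tolower(title), ${'poe'})`
```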
In addition to the standard functions, which all @cap-js database services support, @cap-js/sqlite and @cap-js/postgres also support these common SAP HANA functions, to further increase the scope for portable testing:
- years_between — Computes the number of years between two specified dates.
- months_between — Computes the number of months between two specified dates.
- days_between — Computes the number of days between two specified dates.
- seconds_between — Computes the number of seconds between two specified dates.
- nano100_between — Computes the time difference between two dates to the precision of 0.1 microseconds.
The database service implementations translate these to the best-possible native SQL functions, thus enhancing the scope of portable queries. With open source and the new database service architecture, we also have mechanisms in place to extend this list with custom implementations.
For the SAP HANA functions, both usages are allowed: all-lowercase as given above, as well as all-uppercase.
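A small sketch, assuming the Authors entity from cap/samples/bookshop with dateOfBirth and dateOfDeath elements:

```js
// years_between() runs natively on SAP HANA and is polyfilled on SQLite and PostgreSQL
const longLived = await SELECT.from(Authors)
  .where`years_between(dateOfBirth, dateOfDeath) >= ${80}`
```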
Runtime Only
The function mappings are available for runtime queries only, but not in CDS files.
Session Variables
The API shown below, which includes the function session_context() and specific pseudo variable names, is supported by all new database services, that is, SQLite, PostgreSQL, and SAP HANA. This allows you to write respective code once and run it on all these databases:
SELECT session_context('$user.id')
SELECT session_context('$user.locale')
SELECT session_context('$valid.from')
SELECT session_context('$valid.to')
Among other things, this allows us to get rid of static helper views for localized data like localized_de_sap_capire_Books.
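For example, a native query reading the current request's locale could look like this sketch, using cds.db.run as shown in the next section:

```js
// session_context() with standard variable names works on SQLite, PostgreSQL, and SAP HANA
const [row] = await cds.db.run(`SELECT session_context('$user.locale') as locale`)
console.log(row) //> e.g. { locale: 'en' } (column-name casing may vary by database)
```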
Native DB Queries
If required you can also use native database features by executing native SQL queries:
cds.db.run (`SELECT from sqlite_schema where name like ?`, name)
Use Spring's JDBC Template to leverage native database features as follows:
@Autowired
JdbcTemplate db;
...
db.queryForList("SELECT from sqlite_schema where name like ?", name);
Reading LargeBinary / BLOB
Formerly, LargeBinary elements (or BLOBs) were always returned like any other data type. Now, they are skipped from SELECT * queries. Yet, you can still enforce reading BLOBs by explicitly selecting them. Then the BLOB properties are returned as readable streams.
SELECT.from(Books) //> [{ ID, title, ..., image1, image2 }] (formerly)
SELECT.from(Books) //> [{ ID, title, ... }] (now: BLOBs are skipped)
SELECT(['image1', 'image2']).from(Books) //> [{ image1, image2 }] (formerly)
SELECT(['image1', 'image2']).from(Books) //> [{ image1: Readable, image2: Readable }] (now: returned as streams)
Read more about custom streaming in Node.js.
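A minimal sketch of consuming such a stream, assuming the image1 element and book ID 201 from the examples above:

```js
const { createWriteStream } = require('node:fs')
const [row] = await SELECT(['image1']).from(Books).where({ ID: 201 })
// image1 is returned as a readable stream because it was selected explicitly
row.image1.pipe(createWriteStream('cover.jpg'))
```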
Generating DDL Files
When you run your server with cds watch
during development, an in-memory database is bootstrapped automatically, with SQL DDL statements generated based on your CDS models.
You can also do this manually with the CLI command cds compile --to <dialect>
.
When you've created a CAP Java application with cds init --java
or with CAP Java's Maven archetype, the Maven build invokes the CDS compiler to generate a schema.sql
file for your target database. In the default
profile (development mode), an in-memory database is initialized by Spring and the schema is bootstrapped from the schema.sql
file.
Using cds compile
For example, given these CDS models (derived from cap/samples/bookshop):
using { Currency } from '@sap/cds/common';
namespace sap.capire.bookshop;
entity Books {
key ID : UUID;
title : localized String;
descr : localized String;
author : Association to Authors;
price : {
amount : Decimal;
currency : Currency;
}
}
entity Authors {
key ID : UUID;
name : String;
books : Association to many Books on books.author = $self;
}
using { sap.capire.bookshop as my } from '../db/schema';
service CatalogService {
entity ListOfBooks as projection on my.Books {
*, author.name as author
}
}
Generate an SQL DDL script by running this in the root directory containing both .cds files:
cds compile srv/cat-service --to sql --dialect sqlite > schema.sql
Output:
CREATE TABLE sap_capire_bookshop_Books (
ID NVARCHAR(36) NOT NULL,
title NVARCHAR(5000),
descr NVARCHAR(5000),
author_ID NVARCHAR(36),
price_amount DECIMAL,
price_currency_code NVARCHAR(3),
PRIMARY KEY(ID)
);
CREATE TABLE sap_capire_bookshop_Authors (
ID NVARCHAR(36) NOT NULL,
name NVARCHAR(5000),
PRIMARY KEY(ID)
);
CREATE TABLE sap_common_Currencies (
name NVARCHAR(255),
descr NVARCHAR(1000),
code NVARCHAR(3) NOT NULL,
symbol NVARCHAR(5),
minorUnit SMALLINT,
PRIMARY KEY(code)
);
CREATE TABLE sap_capire_bookshop_Books_texts (
locale NVARCHAR(14) NOT NULL,
ID NVARCHAR(36) NOT NULL,
title NVARCHAR(5000),
descr NVARCHAR(5000),
PRIMARY KEY(locale, ID)
);
CREATE VIEW CatalogService_ListOfBooks AS SELECT
Books.ID,
Books.title,
Books.descr,
author.name AS author,
Books.price_amount,
Books.price_currency_code
FROM sap_capire_bookshop_Books AS Books
LEFT JOIN sap_capire_bookshop_Authors AS author
ON Books.author_ID = author.ID;
--- some more technical views skipped ...
cds compile srv/cat-service --to sql > schema.sql
Output:
CREATE TABLE sap_capire_bookshop_Books (
createdAt TIMESTAMP(7),
createdBy NVARCHAR(255),
modifiedAt TIMESTAMP(7),
modifiedBy NVARCHAR(255),
ID INTEGER NOT NULL,
title NVARCHAR(111),
descr NVARCHAR(1111),
author_ID INTEGER,
genre_ID INTEGER,
stock INTEGER,
price DECFLOAT,
currency_code NVARCHAR(3),
image BINARY LARGE OBJECT,
PRIMARY KEY(ID)
);
CREATE VIEW CatalogService_ListOfBooks AS SELECT
Books_0.createdAt,
Books_0.modifiedAt,
Books_0.ID,
Books_0.title,
Books_0.author,
Books_0.genre_ID,
Books_0.stock,
Books_0.price,
Books_0.currency_code,
Books_0.image
FROM CatalogService_Books AS Books_0;
--- some more technical views skipped ...
TIP
Use the specific SQL dialect (hana, sqlite, h2, postgres) with cds compile --to sql --dialect <dialect> to get DDL that matches the target database.
Rules for Generated DDL
A few observations on the generated SQL DDL output:
- Tables / Views — Declared entities become tables, projected entities become views.
- Type Mapping — CDS types are mapped to database-specific SQL types.
- Slugified FQNs — Dots in fully qualified CDS names become underscores in SQL names.
- Flattened Structs — Structured elements like Books:price are flattened with underscores.
- Generated Foreign Keys — For managed to-one Associations, foreign key columns are created. For example, this applies to Books:author.
In addition, you can use the following annotations to fine-tune generated SQL.
@cds.persistence.skip
Add @cds.persistence.skip to an entity to indicate that this entity should be skipped from generated DDL scripts and that no SQL views should be generated on top of it:
@cds.persistence.skip
entity Foo {...} //> No SQL table will be generated
entity Bar as select from Foo; //> No SQL view will be generated
@cds.persistence.exists
Add @cds.persistence.exists to an entity to indicate that this entity should be skipped from generated DDL scripts. In contrast to @cds.persistence.skip, a database relation is expected to exist, so we can generate SQL views on top.
@cds.persistence.exists
entity Foo {...} //> No SQL table will be generated
entity Bar as select from Foo; //> The SQL view will be generated
On SAP HANA ...
If the respective entity is a user-defined function or a calculation view, one of the annotations @cds.persistence.udf or @cds.persistence.calcview also needs to be assigned. See Calculated Views and User-Defined Functions for more details.
@cds.persistence.table
Annotate an entity with @cds.persistence.table to create a table with the effective signature of the view definition instead of an SQL view.
@cds.persistence.table
entity Foo as projection on Bar {...}
All parts of the view definition not relevant for the signature (like where, group by, ...) are ignored.
Use case for this annotation: Use projections on imported APIs as replica cache tables.
@sql.prepend / append
Use @sql.prepend and @sql.append to add native SQL clauses before or after the generated SQL output of CDS entities or elements.
Example:
@sql.append: ```sql
GROUP TYPE foo
GROUP SUBTYPE bar
```
entity E { ...,
@sql.append: 'FUZZY SEARCH INDEX ON'
text: String(100);
}
@sql.append: 'WITH DDL ONLY'
entity V as select from E { ... };
Output:
CREATE TABLE E ( ...,
text NVARCHAR(100) FUZZY SEARCH INDEX ON
) GROUP TYPE foo
GROUP SUBTYPE bar;
CREATE VIEW V AS SELECT ... FROM E WITH DDL ONLY;
The following rules apply:
- The value of the annotation must be a string literal.
- The compiler doesn't check or process the provided SQL snippets in any way. You're responsible for ensuring that the resulting statement is valid and doesn't negatively impact your database or your application. We don't provide support for problems caused by using this feature.
- If you refer to a column name in the annotation, you need to take care of a potential name mapping yourself, for example, for structured elements.
- Annotation @sql.prepend is only supported for entities translating to tables. It can't be used with views nor with elements.
- For SAP HANA tables, there's an implicit clause that is overwritten by an explicitly provided @sql.prepend.
- Both @sql.prepend and @sql.append are disallowed in SaaS extension projects.
If you use native database clauses in combination with @cds.persistence.journal, see Schema Evolution Support of Native Database Clauses.
Reserved Words
The CDS compiler and CAP runtimes provide smart quoting for reserved words in SQLite and in SAP HANA so that they can still be used in most situations. But in general reserved words cannot be used as identifiers. The list of reserved words varies per database.
Find here a collection of resources on selected databases and their reference documentation:
- SAP HANA SQL Reference Guide for SAP HANA Platform (Cloud Version)
- SAP HANA SQL Reference Guide for SAP HANA Cloud
- SQLite Keywords
- H2 Keywords/Reserved Words
- PostgreSQL SQL Key Words
There are also reserved words related to SAP Fiori.
Database Constraints
The information about foreign key relations contained in the associations of CDS models can be used to generate foreign key constraints on the database tables. Within CAP, referential consistency is established only at commit. The "deferred" concept for foreign key constraints in SQL databases allows the constraints to be checked and enforced at the time of the COMMIT statement within a transaction rather than immediately when the data is modified, providing more flexibility in maintaining data integrity.
Enable generation of foreign key constraints on the database with:
cds.features.assert_integrity = db
Database constraints are not supported for H2
Referential constraints on H2 cannot be defined as "deferred", which is needed for database constraints within CAP.
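In a Node.js project, this setting typically goes into your package.json, for example (a sketch mirroring the option shown above):

```json
{ "cds": { "features": { "assert_integrity": "db" } } }
```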
With that switched on, foreign key constraints are generated for managed to-one associations. For example, given this model:
entity Books {
key ID : Integer; ...
author : Association to Authors;
}
entity Authors {
key ID : Integer; ...
}
The following Books_author constraint would be added to table Books:
CREATE TABLE Authors (
ID INTEGER NOT NULL, -- primary key referenced by the constraint
...,
PRIMARY KEY(ID)
);
CREATE TABLE Books (
ID INTEGER NOT NULL,
author_ID INTEGER, -- generated foreign key field
...,
PRIMARY KEY(ID),
CONSTRAINT Books_author -- constraint is explicitly named
FOREIGN KEY(author_ID) -- link generated foreign key field author_ID ...
REFERENCES Authors(ID) -- ... with primary key field ID of table Authors
ON UPDATE RESTRICT
ON DELETE RESTRICT
VALIDATED -- validate existing entries when constraint is created
ENFORCED -- validate changes by insert/update/delete
INITIALLY DEFERRED -- validate only at commit
)
No constraints are generated for...
- Unmanaged associations or compositions
- To-many associations or compositions
- Associations annotated with @assert.integrity: false
- Associations where the source or target entity is annotated with @cds.persistence.exists or @cds.persistence.skip
If the association is the backlink of a composition, the constraint's delete rule changes to CASCADE. That applies, for example, to the parent association here:
entity Genres {
key ID : Integer;
parent : Association to Genres;
children : Composition of many Genres on children.parent = $self;
}
As a special case, a referential constraint with delete cascade is also generated for the text table of a localized entity, although no managed association is present in the texts entity.
Add a localized element to entity Books from the previous example:
entity Books {
key ID : Integer; ...
title : localized String;
}
The generated text table then is:
CREATE TABLE Books_texts (
locale NVARCHAR(14) NOT NULL,
ID INTEGER NOT NULL,
title NVARCHAR(5000),
PRIMARY KEY(locale, ID),
CONSTRAINT Books_texts_texts
FOREIGN KEY(ID)
REFERENCES Books(ID)
ON UPDATE RESTRICT
ON DELETE CASCADE
VALIDATED
ENFORCED
INITIALLY DEFERRED
)
Database constraints aren't intended for checking user input
Instead, they protect the integrity of your data in the database layer against programming errors. If a constraint violation occurs, the error messages coming from the database aren't standardized by the runtimes but presented as-is.
→ Use @assert.target for corresponding input validations.
Using Native Features
In general, the CDS-to-SQL compiler doesn't 'understand' SQL functions but translates them to SQL generically, as long as they follow the standard call syntax of function(param1, param2). This allows you to use native database functions inside your CDS models.
Example:
entity BookPreview as select from Books {
IFNULL (descr, title) as shorttext //> using HANA function IFNULL
};
The OVER clause for SQL Window Functions is supported, too:
entity RankedBooks as select from Books {
name, author,
rank() over (partition by author order by price) as rank
};
Using Native Functions with Different DBs
In case of conflicts, follow these steps to provide different models for different databases:
1. Add database-specific schema extensions in specific subfolders of ./db, for example db/sqlite and db/hana:

   using { AdminService } from '..';
   extend projection AdminService.Authors with {
     strftime('%Y',dateOfDeath)-strftime('%Y',dateOfBirth) as age : Integer
   }

   using { AdminService } from '..';
   extend projection AdminService.Authors with {
     YEARS_BETWEEN(dateOfBirth, dateOfDeath) as age : Integer
   }

2. Add configuration in specific profiles to your package.json, to use these database-specific extensions:

   { "cds": { "requires": { "db": { "kind": "sql", "[development]": { "model": "db/sqlite" }, "[production]": { "model": "db/hana" } } }}}
CAP samples demonstrate this in cap/samples/fiori.
There's also a code tour available for that.