Schema support in PostgreSQL (for example, for multi-tenancy)

Hi there,
I’m designing a PostgreSQL DB for my new app, which will initially be developed for a single customer but will eventually evolve into a multi-tenant product.

Some people advise using schemas to separate each client’s data, but that can make it hard to automate the provisioning of new clients in the application (you have to create and configure a fresh schema for each new client).

But even if I don’t end up using multiple schemas for the core app data, I plan to use them for other purposes, for example to separate test/demo data during the initial alpha/beta stage of the project.
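Just to make the provisioning concern concrete, setting up a new client this way would mean scripting something like the following for every tenant (the schema, table, and role names below are purely illustrative):

CREATE SCHEMA tenant_acme;

CREATE TABLE tenant_acme.orders (
    id         serial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);

GRANT USAGE ON SCHEMA tenant_acme TO app_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA tenant_acme TO app_role;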

My first question is: how are PG schemas supported in Data Abstract? Are there specialized tools/classes to manage/provision them?
(not using Relativity Server here, but a custom Data Abstract .NET server)

Thanks,
Paolo

Hello

Data Abstract ‘Schemas’ have no relation to DB-level schemas. Actually, Data Abstract’s goal is to hide any DB-server specifics from the client apps (e.g. the client-side data access code is the same regardless of whether SQLite, Oracle, or something else is used on the server side).
You’ll need to write your own custom code to manage PostgreSQL schemas. Then the server app’s Schema (a Data Abstract Schema this time) needs to be adjusted to point to the required PostgreSQL schema at server startup (this code is really simple: just loop over all of the Schema’s tables and commands).

Regards

Ok, I got it…

But indeed you can access all the schemas in a DB over one connection, and run cross-schema queries simply by qualifying the objects in your SQL.
It’s very common in the Postgres (and, I think, Oracle) community to partition data into several schemas under the same DB… and a Postgres developer expects to be able to access all of those schemas from the back-end logic.
For example, someone could put the membership tables in a schema named membership under the app DB, and simply select from membership.users whenever users are needed, or join against them.
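For instance (all object names other than membership.users are just illustrative), a single connection can run a cross-schema join like:

SELECT e.id, e.title, u.name
FROM public.entries AS e
JOIN membership.users AS u ON u.id = e.user_id;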
Another common use of schemas is sharding, as Instagram does, but that’s not a requirement here, since it only becomes useful in big-data scenarios…

So, to narrow my initial question a little bit:

Do I need to drop the use of schemas completely to work with DA, going against the PG philosophy, or does DA somehow support a single DB connection with multiple schemas? (I mean cross-schema queries, inserts, etc.)

Thanx again for your support,
P.

Hello

Hmm, what makes you think you won’t be able to access different PG schemas from DA? If PG itself supports querying multiple PG schemas over a single DB connection, then DA will be able to use them.

Regards

I’m trying to connect, with the Connections Manager in Schema Modeler, to a DB that has a “public” and a “doc” schema.
I want to connect to the “doc” schema by default, so in the NPGSQL connection properties I added the custom connection property “searchpath=doc”.
The connection itself is successful, but Schema Modeler shows me the contents of the public schema, not the doc one… So something is missing/broken here, or I’m doing something wrong.
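For comparison, on the plain SQL side the equivalent checks/settings would be something like this (the role name is hypothetical):

SHOW search_path;                                   -- what unqualified names currently resolve against
SET search_path TO doc, public;                     -- session-level override
ALTER ROLE app_user SET search_path = doc, public;  -- persistent per-role default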

P.

Ah, I see.
Schema Modeler retrieves the table list using this query:

SELECT table_schema || '.' || table_name FROM information_schema.tables where table_type='BASE TABLE' and table_schema not in ('pg_catalog', 'information_schema')

Could you check what exactly this query returns for your database?

Ok, it seems a permission issue on my side… :confounded:

I’m trying to narrow it down, but the ALTER DEFAULT PRIVILEGES command in PG is really poorly documented! I was under the false impression that I could set defaults for group roles, whereas that command effectively works only for individual users… None of the permissions that should be in place are actually there! :rage:
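For anyone hitting the same wall: as far as I understand it, ALTER DEFAULT PRIVILEGES keys off the role that will create the objects (the FOR ROLE clause, which defaults to the current role), not the group you grant to, so something like this only covers objects later created by owner_role (the role names are hypothetical):

ALTER DEFAULT PRIVILEGES FOR ROLE owner_role IN SCHEMA doc
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_group;

-- existing tables still need an explicit grant:
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA doc TO app_group;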

Now the connection is successful and I can get the objects from all my schemas…

But… I can’t see columns of ARRAY type, so I guess they are not supported by DA because they’re too PG-specific.

Is there a known workaround? For example, are they actually supported but only visible with another driver (like dotConnect), or can they be exposed by constraining them in some other way…

In addition, I have some JSONB fields, and I’m able to see them listed as WideMemo fields… What’s the recommended way to deal with them?
Basically they’re much like an enhanced version of HSTORE columns. I store simple key:value pairs as user metadata associated with a single record (an entry in my app). In addition, they need to be validated on insert/update against custom rules held in another table (a sort of user-defined constraint), and returned to the client as a property of the JSON payload.

P.

The only workaround would be to somehow simplify them to the primitive types supported by DA.

E.g. one could create a view that converts JSONB into a JSON string, and then add the conversion back by defining custom commands for update/insert (performing the needed validation using DataService events; they let you inspect and modify the data sent in Delta changes).
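A rough sketch of that idea, with made-up table/column names (the parameter placeholder syntax depends on the driver):

-- read side: flatten the PG-specific columns to plain text
CREATE VIEW doc.entries_flat AS
SELECT id,
       title,
       metadata::text             AS metadata_json,  -- jsonb -> JSON string
       array_to_string(tags, ',') AS tags_csv        -- text[] -> comma-separated string
FROM doc.entries;

-- write side: a custom update command casts the strings back
UPDATE doc.entries
SET metadata = :metadata_json::jsonb,
    tags     = string_to_array(:tags_csv, ',')
WHERE id = :id;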

UPDATE:
From what I see, PostgreSQL provides a very nice set of functions for converting JSONB into dataset rows and back. One would just need to use a data table with a static SQL statement to retrieve the data, and update commands to convert the data rows back into JSONB.
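For example (the table names are hypothetical), jsonb_each_text expands a JSONB column into key/value rows, and jsonb_object_agg rebuilds the JSONB value on the way back:

-- one row per key/value pair of the metadata column
SELECT e.id, kv.key, kv.value
FROM doc.entries AS e,
     jsonb_each_text(e.metadata) AS kv;

-- rebuild the jsonb value from key/value rows
SELECT id, jsonb_object_agg(key, value) AS metadata
FROM doc.entry_metadata_rows
GROUP BY id;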

Yes, you’re right.

I’m investigating the most elegant way to implement read/write semantics for my JSONB columns, and that is indeed one of the nicest approaches…

But most of the time I only need read-only data constructed from a mix of relational data and JSONB columns. I think it should be better, performance-wise, to let PostgreSQL build the JSON, rather than transforming everything to relational and then back to JSON in DA…
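For example (all object names here are hypothetical), something like this would let PostgreSQL assemble the payload directly:

SELECT jsonb_build_object(
         'id',       e.id,
         'title',    e.title,
         'user',     jsonb_build_object('id', u.id, 'name', u.name),
         'metadata', e.metadata
       ) AS payload
FROM doc.entries AS e
JOIN membership.users AS u ON u.id = e.user_id;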
What do you think?

P.