
I am working on a project that will (hopefully *crosses fingers*) involve multiple clients logging on to my site to perform CRUD operations on a MySQL database. Each client will be able to add and update information in the database as needed through a GUI.

I plan on writing a PHP script that creates the necessary tables for each specific client. For example:
Client A would get Users_A, First_Name_A, Last_Name_A, Address_A
Client B would get Users_B, First_Name_B, Last_Name_B, Address_B
and so on...

Should I create one database connection that handles all the queries of every client, or should I create a unique username for each client and restrict access to the other databases? Am I vulnerable to attacks if I just use one login/pass to handle everything?


4 Replies

Replied

No matter what you do, there is always a chance you are vulnerable to something. You just need to make sure you implement best practices: watch out for SQL injection, keep all of your software up to date, apply vendor patches as necessary, and use secure connections (HTTPS, SSH, etc.); beyond that it depends on your situation.
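On the SQL injection point, here is a minimal sketch of a parameterized query in PHP with PDO. The connection details and the users table/column names are placeholder assumptions based on the question, not a specific recommendation:

```php
<?php
// Minimal sketch of a parameterized query with PDO.
// The DSN, credentials, and the users table/columns are placeholder assumptions.
$pdo = new PDO(
    'mysql:host=localhost;dbname=myapp;charset=utf8mb4',
    'app_user',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// User input is bound as data instead of being concatenated into the SQL string,
// so it cannot change the structure of the query.
$stmt = $pdo->prepare(
    'INSERT INTO users (first_name, last_name, address) VALUES (:first, :last, :address)'
);
$stmt->execute([
    ':first'   => $_POST['first_name'],
    ':last'    => $_POST['last_name'],
    ':address' => $_POST['address'],
]);
```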

As far as having one or multiple databases goes, that depends too, I suppose. If it were me, though, and if it makes sense for your plans, I would probably lean towards a single database.

There are plenty of websites that work with thousands of users (including Ozzu) which use one database and one set of credentials to access it. You just need to make sure access to those credentials is tightly restricted. Also, if you are using something like Git, make sure any files that contain credentials are .gitignored.
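For example, if the credentials lived in a file such as config.php (the filename is just an assumption), the relevant .gitignore entries might look like this:

```
# Keep any files holding credentials out of version control
config.php
.env
```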

Replied

What about speed? Let's say I have a table called "schedule". Over time, that table will become huge. Will there be a noticeable speed problem when querying that huge table versus each client having their own schedule table, or would the difference be negligible?

Currently I have 8 users on a beta version of my project, so it doesn't really matter at this point, but if my idea takes off... that could change.

Replied

True, over time certain tables may become extremely large. That is where you would make good use of the right sort of indexes, or composite indexes, on your columns.
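As a hedged sketch, assuming a single shared schedule table with client_id and schedule_date columns (hypothetical names), a composite index on the columns you filter by is the sort of thing meant here:

```sql
-- Hypothetical shared schedule table: index the columns used in WHERE/ORDER BY.
ALTER TABLE schedule
    ADD INDEX idx_client_date (client_id, schedule_date);

-- Queries filtering on client_id (and optionally schedule_date)
-- can then use the index instead of scanning the whole table.
SELECT *
FROM schedule
WHERE client_id = 42
  AND schedule_date >= '2024-01-01';
```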

At one point, when Ozzu was getting quite a bit of traffic, I would use the slow query log in MySQL to identify queries that took over a specified time, such as 2 seconds. I would then run those slow queries through MySQL's EXPLAIN so I could see why they were slow and where the right sort of indexes could speed them up, or whether the queries should be rewritten to be more efficient. You can literally take a query that might take a few minutes to run down to less than a second by placing the right indexes or doing things in a smarter way.
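A rough sketch of that workflow in MySQL (the 2-second threshold and the schedule query are just examples):

```sql
-- Log any query that takes longer than 2 seconds (values are examples).
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

-- Then take a query from the slow log and ask MySQL how it executes it;
-- the "type", "key", and "rows" columns show whether an index is being used.
EXPLAIN SELECT * FROM schedule WHERE client_id = 42;
```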

Eventually you may get so large that you need to explore other solutions for your Big Data problems, but by the time you are that big you will probably have plenty of other issues to solve first 🙂

Replied

Just out of interest, here are the performance benchmarks for MySQL.

As for being vulnerable to attack when using a single username and password, there are mechanisms to keep the authentication credentials out of the PHP scripts themselves. You can store them in environment variables or in external platforms designed for exactly this purpose.
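For instance, a minimal sketch of reading the credentials from environment variables in PHP rather than hard-coding them (the DB_HOST, DB_NAME, DB_USER, and DB_PASS names are just examples):

```php
<?php
// Read connection details from the environment rather than the source code.
// The variable names here are example choices, not a fixed convention.
$host = getenv('DB_HOST');
$name = getenv('DB_NAME');
$user = getenv('DB_USER');
$pass = getenv('DB_PASS');

$pdo = new PDO("mysql:host={$host};dbname={$name};charset=utf8mb4", $user, $pass);
```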

You might also consider having a read-only user and a separate write user. Unfortunately, a useful rule of thumb is to assume you have already been compromised, so try to make sure your data is encrypted both in transit and at rest.
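A hedged sketch of what that read-only/write split might look like in MySQL, with placeholder account names, passwords, and database name:

```sql
-- Read-only account for anything that only needs to display data.
CREATE USER 'app_read'@'localhost' IDENTIFIED BY 'read_password';
GRANT SELECT ON myapp.* TO 'app_read'@'localhost';

-- Separate account for the parts of the application that modify data.
CREATE USER 'app_write'@'localhost' IDENTIFIED BY 'write_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app_write'@'localhost';
```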
