Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
It only works one way, from local development to online production,
It assumes the local DEV DB is the only one with changes,
It therefore does not actually sync, but deletes and replaces the online DB,
It is not modular in design.
I found it was a good starting point, though, so I started editing it to add:
Sync local DB1 to local DB2,
Sync local DB table1 to table2,
Sync local dev to local test.
Since I'm just learning how to use MySQL and other DBs from Bash, I'm thinking a collaboration to turn this into a full-service DB sync routine would be great. I hope you agree. I want to start by moving to a modular design, putting all processing into subroutine "FUNCTIONS", so the main flow stays very fluid and simply calls whichever processing functions are needed.
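A minimal sketch of that modular layout, assuming the stock mysql client is on the PATH. The function and table names (db_query, sync_tables, etc.) and the assumption that each table has a primary key are mine, not from the original script:

```shell
#!/bin/bash
# Modular sync skeleton: one thin DB wrapper, pure SQL-building
# helpers, and a main dispatcher that only wires them together.

MYSQL_OPTS="--batch --skip-column-names"

# Single choke point for running SQL against a named database.
db_query() {
    local db=$1 sql=$2
    mysql $MYSQL_OPTS "$db" -e "$sql"     # unquoted on purpose: two options
}

# Build (but do not run) the SQL that copies rows from src to tgt.
# REPLACE relies on the table having a primary/unique key.
build_sync_sql() {
    local src=$1 tgt=$2
    printf 'REPLACE INTO %s SELECT * FROM %s;' "$tgt" "$src"
}

sync_tables() {
    local db=$1 src=$2 tgt=$3
    db_query "$db" "$(build_sync_sql "$src" "$tgt")"
}

main() {
    case $1 in
        table) sync_tables "$2" "$3" "$4" ;;
        *)     echo "usage: $0 table <db> <src_table> <tgt_table>" >&2 ;;
    esac
}

# main "$@"   # uncomment once wired into the real script
```

Keeping the SQL builders separate from the function that talks to mysql makes each piece testable on its own, which fits the "fluid main flow" goal.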
By syncing I am guessing you mean updating only what has changed. So the question is how you determine just the changes, which could be either updates or new rows. Do the databases and tables have identical structures? If db2 or table2 is just for backups and does not change itself, you could possibly use a transaction log. The logs are binary, so you would need to be able to read the log and perform the same operations.
Otherwise, about the only way would be to use SQL to compare the tables: for new rows, use a unique primary key or maybe a date; for changes to existing rows, you need to compare each column.
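The "new rows via a unique primary key" comparison above can be sketched as a shell function that emits the SQL for the mysql client to run. The database, table, and key names here (dev, test, customers, id) are placeholders, and both databases are assumed to live on the same server:

```shell
#!/bin/bash
# Emit SQL that lists key values present in the source table
# but missing from the target table (i.e. the "new" rows).
detect_new_rows_sql() {
    local src_db=$1 tgt_db=$2 tbl=$3 key=${4:-id}
    cat <<SQL
SELECT s.${key}
FROM ${src_db}.${tbl} AS s
LEFT JOIN ${tgt_db}.${tbl} AS t USING (${key})
WHERE t.${key} IS NULL;
SQL
}

# Example (not run here):
# mysql --batch -e "$(detect_new_rows_sql dev test customers)"
```

Changed rows would need a second query comparing each column (or a checksum column), as noted above.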
Yes! Records have to be pulled one by one and any changes updated. Of course, that means the DBs and/or tables have to be identified as source and target, and if the target has changes of its own, those have to be added to a special copy of the target containing only the conflicting changes, so a post-process can find them and ask the user/operator whether they should go into the update or be deleted.
Hope that explains my thinking on this.
Also, I would write this in PHP, but you have seen my issues with that. Additionally, I added WAMP Server to my Win11 laptop, but I am experiencing issues there too, so I can't copy over the DB sources needed to work around the processing problems on the Linux server. That is why I have turned to Bash as an alternative.
Interesting subject.
Are we talking about syncing the structure of a database or the contents?
Changes to structure should be built in dev, installed to test (and tested), then installed to production along with the programming changes that required the change in structure.
I’m sure that processes exist for mirroring changes in content between database copies. It’s certainly possible to extract content from one database and load it into another. Think Update else Insert. Not sure I’d bother to determine which field(s) had changed, as long as it was possible to know that a change had happened in the source. I’d use a timestamp and only update records changed since the last mirror…or do it live if possible.
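The "Update else Insert" plus timestamp idea maps naturally onto MySQL's INSERT ... ON DUPLICATE KEY UPDATE. A sketch, with illustrative table and column names (a last_modified timestamp column and an id primary key are assumed):

```shell
#!/bin/bash
# Emit SQL that upserts only rows changed since the last mirror.
# New rows are inserted; rows whose key already exists are updated.
incremental_upsert_sql() {
    local src=$1 tgt=$2 since=$3
    cat <<SQL
INSERT INTO ${tgt} (id, name, last_modified)
SELECT id, name, last_modified FROM ${src}
WHERE last_modified > '${since}'
ON DUPLICATE KEY UPDATE
  name = VALUES(name),
  last_modified = VALUES(last_modified);
SQL
}

# Example (not run here):
# mysql mydb -e "$(incremental_upsert_sql dev_customers customers '2024-01-01 00:00:00')"
```

Storing the timestamp of each successful run lets the next run pick up exactly where the last one left off.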
I’m not clear on the “post process” requirement. Where did the change come from that it was not already “approved”?
You'd definitely need to sit down and spec out all the requirements + restrictions, so that you can automate as many updates as possible - see the qns/comments in above posts.
You can also have the program log any records it can't handle, for later perusal by humans.
Definitely have 'test' copies of both the src + tgt systems.
BTW, Perl would give you detailed control & data munging capabilities and it has modules for all kinds of DBs ...
Database files are binary and cannot be easily edited directly. Sometimes a database is even stored on disk without a filesystem. Different database engines work differently, so you must read the official documentation (and best practices) for the supported way of syncing databases.
for example: https://dev.mysql.com/doc/workbench/...onization.html
Additionally different databases may have different requirements and different ways of syncing.
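For a plain one-way copy, the standard documented tools (mysqldump piped into mysql) are often simpler than row-by-row comparison. This helper only builds the command string so it can be reviewed before running; the database names are placeholders, and note that loading a dump replaces the target's tables rather than merging:

```shell
#!/bin/bash
# Build (not execute) a one-way dump-and-load pipeline.
# --single-transaction gives a consistent snapshot for InnoDB tables.
build_copy_cmd() {
    local src_db=$1 tgt_db=$2
    printf 'mysqldump --single-transaction %s | mysql %s' "$src_db" "$tgt_db"
}

# Example (not run here):
# eval "$(build_copy_cmd dev_db test_db)"
```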
ETL is what I thought of, too. I became an ETL programmer after architecting and building a data migration…a “one time” process that took over a year to build and test…from a mainframe hierarchical database to a Unix-based relational database. The next jobs were ETL. Eventually using Informatica, which is almost not a “language” at all, but is a pretty good tool for ETL.
I agree that perl would be an excellent tool for “syncing” databases, if one can’t find a good fit off-the-shelf.
So you can see why this is so confusing, unless you assume (and you're never supposed to assume anything) that the correct file to edit is the 8.2.0 version file.
So looking for a better HOWTO on config for phpmyadmin on Win11.
Cheers!
TBNK
PS
Was supposed to convert my phone to Straight Talk, where I will have unlimited 5G internet via USB tethering, but the local Walmart rep took a job in Ft. Worth, so until I make a trip to the DFW metro I will still not have iNet.
For now, just content: several servers are involved but not clustered, so updates happen on whichever server a user is logged into at the time, and the domain control server needs to get all the changes so it can propagate to and sync with the remote servers.