submitted 1 year ago (last edited 1 year ago) by TiphaineRupa@feddit.de to c/programming@beehaw.org

Recently I've experienced a significant increase in merge conflicts at the company I'm currently working at (we hired a couple of junior data scientists, and some are not that familiar with git).

Even though those merge conflicts can be a little tedious to resolve, I realized that I've personally started to enjoy them - especially using fugitive. I hadn't had many conflicts in a while, so I'd almost forgotten about Gdiffsplit and how awesome that plugin is...

Now I'm wondering, how often do you have to resolve (more or less complex) merge conflicts?

TheCulturedOtaku@beehaw.org 3 points 1 year ago

Merge conflicts tend to happen often when the codebase isn't pre-planned and split into modular sections (which is even harder in data science, given the often functional nature of data science codebases). But when they do happen, they don't have to be awful.

To resolve them, I generally create a safe local branch from my local copy to temporarily complete the merge in, and then I pull in the remote branch I want to merge using git merge -X theirs ${THEIR_BRANCH_NAME}, which favors their remote changes over yours wherever the two sides can't be combined cleanly (I assume origin is more correct than me). Any conflicts that still arise you resolve manually by diffing, and then you check in the final resolved version as a new commit locally. Once complete, it is generally safe to push that temp branch to the remote or your fork for a pull request, or you can merge the temp branch with the conflict resolutions back into your running branch. Either way, before the PR, make sure to run tests with the integrated changes first, and afterwards pull the merged remote to fast-forward your running copy (such as with git merge -X theirs origin/${HEAD} or git pull origin ${HEAD}).
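
A minimal sketch of that temp-branch workflow, assuming a running branch named feature/my-work and an incoming branch named their-branch (both names, and the pytest test command, are hypothetical placeholders):

```bash
# Start from your up-to-date local branch and park the merge on a temp branch
git fetch origin
git switch feature/my-work              # hypothetical: your running branch
git switch -c merge-temp                # safe scratch branch for the merge

# Merge their branch, preferring their side wherever hunks conflict
git merge -X theirs origin/their-branch # hypothetical: the incoming branch

# If anything is still conflicted, inspect and resolve it by hand
git status
git diff                                # review remaining conflict markers
# ...edit files, then:
git add -A
git commit                              # records the resolved merge

# Run tests against the integrated result before pushing
pytest                                  # assumption: use whatever your project runs

# Push the temp branch for a PR, or fold it back into your running branch
git push -u origin merge-temp
# or: git switch feature/my-work && git merge merge-temp
```

If the merge on the temp branch goes wrong, your running branch is untouched and you can simply delete merge-temp and start over.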

Best answer though: pre-plan your codebase with some modularity so that two people aren't actively working on the same file at once, encourage daily check-ins to remotes and daily pulls, and ensure that headless unit tests are in place for critical areas, such as logic and boundary cases, at minimum (and that those run in CI/CD). +1 if you use uniform docker tooling to ensure all environments, even local, are the same. And another +1 if you have good telemetry based on APM metrics and traces for after code is integrated.
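
As a rough illustration of the "uniform docker tooling" point, here is a minimal sketch, assuming a Dockerfile at the repo root and a pytest-based test suite (the image name, tag, and test command are all hypothetical):

```bash
# Build the same image that CI uses, so local and CI environments match
docker build -t myproject-ci:latest .

# Run the headless unit tests inside that image
docker run --rm myproject-ci:latest pytest -q tests/

# A CI/CD pipeline would run these same two commands on every push,
# so integration problems surface before a merge lands.
```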
