Storage
The biggest problem, for me, was that because of the VM in between you had permission issues if you wanted to keep data stored on the host. My databases are simply too large to keep in a container that you should be able to throw away on a whim, and you tend to fall back on Vagrant practices if your initial "docker-compose up" takes about half an hour. Prime examples of this issue are MariaDB / Percona and MongoDB.
For MySQL (and many others) this issue could be bypassed with a "hacked run script" that changes the mysql user in the container to the same UID/GID as the owner of the mounted folder.
Since the database then runs with the same numeric user id as your OSX user, data could be stored and files could be locked. There are some other solutions, but this one was the easiest for me.
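Roughly, such a wrapper boils down to something like this; the data path and the stock entrypoint location are assumptions based on the official MySQL-style images, not the exact script I used:

    #!/bin/sh
    # Remap the container's mysql user to the UID/GID that owns the
    # mounted data directory, then hand off to the original entrypoint.
    DATA_DIR=/var/lib/mysql                        # assumed mount point
    HOST_UID=$(stat -c '%u' "$DATA_DIR")
    HOST_GID=$(stat -c '%g' "$DATA_DIR")
    groupmod -g "$HOST_GID" mysql
    usermod -u "$HOST_UID" -g "$HOST_GID" mysql
    chown -R mysql:mysql "$DATA_DIR"
    exec /usr/local/bin/docker-entrypoint.sh "$@"  # assumed stock entrypoint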
Not so for MongoDB with its memory locking, unfortunately. That forced you to fall back on a data container that ultimately got stored within the docker-machine. I have a site with a data volume so large it actually forced me to run MongoDB directly on OSX, as I couldn't even get the backup restored within docker.
So my biggest hope was that the built-in virtualization would finally fix the container data sharing problems.
After testing I can happily say that this is the case!
I don't even have data containers anymore. Both for MySQL and MongoDB! Yay!
They are simply mounted from the physical OSX folder.
I can just do a docker-compose down to get rid of both network and containers and my imported data will still be there after the subsequent docker-compose up.
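For illustration, the volume definitions are now plain bind mounts again; the host paths below are placeholders, not my actual layout:

    # docker-compose.yml (excerpt) -- host paths are placeholders
    percona:
      image: percona
      volumes:
        - ./data/mysql:/var/lib/mysql
    mongodb:
      image: mongo
      volumes:
        - ./data/mongodb:/data/db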
Enter issue numero dos: localhost
With the docker machine, all public ports of your set of containers were mapped onto the IP address of the VM. If you were only running one "cluster" at a time, ports 80 and 443 were available to you, you could just put the docker-machine's IP in your hosts file, and everything worked and had a nice DNS name as cherry on the cake. With the new version this changed: all ports are now mapped on localhost.
And on top of that, OSX doesn't like it when you try to map to privileged ports, so 80 and 443? Not so much. That means having to fall back on the "auto-assigned host port" functionality.
Fortunately macOS (note to self: start referring to it with the correct new name) has a lot of networking functionality out of the box. Throw socat in the mix and you have yourself a winning formula (it installs easily via Homebrew, by the way).
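The two building blocks are just these; the alias IP and ports are example values matching the output further down:

    # create an extra IP on the loopback interface
    sudo ifconfig lo0 alias 172.99.0.1 255.255.255.0
    # forward a port on that IP to the auto-assigned localhost port
    socat TCP-LISTEN:9200,bind=172.99.0.1,fork,reuseaddr TCP:127.0.0.1:32811 &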
So I wrote myself a PHP script (I'm a PHP developer after all) that takes a couple of values to configure and will create an IP alias on localhost, inspect the docker network related to the folder where you start the script, and map all public ports of the containers in your docker-compose configuration to that IP.
After you've run the script you'll have a fresh new IP available on your mac with all ports linked to their localhost counterparts. I've started on automatically adding a hosts entry as well but haven't had the time to finish it. After all, it's normally a one-time thing to do and not vital for me.
Feel free to add it yourself if you require it.
I've added it as a gist on github:
Output will look something like this:
> Terminating all socat instances...
> remove lo0 alias...
> add lo0 alias...
> Adding container port forwards, obtaining container list via network 'test_default'...
>> Inspecting container 'test_elasticsearch'...
>>> 0.0.0.0:32811 -> 172.99.0.1:9200
>>> 0.0.0.0:32810 -> 172.99.0.1:9300
>> Inspecting container 'test_php'...
>> Inspecting container 'test_percona'...
>>> 0.0.0.0:32808 -> 172.99.0.1:3306
>> Inspecting container 'test_mongodb'...
>>> 0.0.0.0:32812 -> 172.99.0.1:27017
>> Inspecting container 'test_mailcatcher'...
>>> 0.0.0.0:32809 -> 172.99.0.1:1080
>> Inspecting container 'test_beanstalkd'...
>>> 0.0.0.0:32813 -> 172.99.0.1:11300
>> Inspecting container 'test_web'...
>>> 0.0.0.0:32814 -> 172.99.0.1:80
> Adding 'test.dev' host entry to containers ...
>> 'test_elasticsearch'...
>> 'test_php'...
>> 'test_percona'...
>> 'test_mongodb'...
>> 'test_mailcatcher'...
>> 'test_beanstalkd'...
>> 'test_web'...
As you can see, my mac now has a dedicated IP for the docker-compose cluster with all relevant ports mapped to it. One "172.99.0.1 test.dev" entry in /etc/hosts later and I have a nice development environment.
Issue 3: XDebug
Now my xdebug no longer works. I need it working both for the CLI from within the containers and from the web.
After messing around for a while, I found a solution for that as well and the gist already includes it:
Add the IP that is made available on the host to the containers' /etc/hosts, together with the chosen DNS name (a hand-rolled version of that step is sketched after the list below).
This has 2 advantages:
- xdebug supports hostnames as well as IPs, so we can point it at the DNS name
- If you use the DNS entry in your code somewhere, the containers actually know the associated IP
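Done by hand for a single container, that step boils down to this (container name and IP taken from the example output above):

    # append the alias to a running container's /etc/hosts
    docker exec test_php sh -c 'echo "172.99.0.1 test.dev" >> /etc/hosts'

Keep in mind the entry disappears when the container is recreated, which is why the script takes care of it for you.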
Getting xdebug to work is now simply a matter of two configuration entries:
- xdebug.remote_host=test.dev
- xdebug.remote_connect_back=0
The second entry prevents xdebug from trying to figure out another IP to connect to via REMOTE_ADDR and the like, so it is equally vital that you add it.
This works because the IP is an alias of your localhost interface, which most editors also listen on while waiting for a debugger connection.
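Put together, and assuming the test.dev name from above, the relevant xdebug (2.x) ini entries look like this; remote_enable is included for completeness:

    ; xdebug.ini -- 'test.dev' matches the /etc/hosts entry added earlier
    xdebug.remote_enable=1
    xdebug.remote_host=test.dev
    xdebug.remote_connect_back=0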
Notes:
If your container has separate PHP configuration files for CLI and FPM/web, make sure to add the settings to both (most do, so check).
For those who never got xdebug to work on the CLI: This is easily accomplished by adding an export statement to your .bashrc (or whatever login script your shell uses):
export XDEBUG_CONFIG="idekey=<YOUR IDE KEY>"
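With that exported, any PHP command run in that shell will announce the idekey and, thanks to the remote_host setting above, connect back to your editor, e.g.:

    php some-script.php   # hypothetical script; any CLI invocation now triggers xdebug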
Happy developing!