I don't think it's really more effort. If you just want to use it off the shelf, with no customizations (which is all you can do from the "add-on store"), then it's: open your docker-compose.yaml, copy and paste a block, rename the service to whatever you want to call it, and change the image name to whatever image you want to run. That might take me two minutes vs. one minute digging through the add-on store, or about the same amount of time if I have to go add the repo manually anyway. But is it more effort? No, I don't think so.
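To be concrete, the off-the-shelf case is literally just a block like this (the service name and image here are placeholders, swap in whatever you actually want to run):

```yaml
# hypothetical service block in docker-compose.yaml;
# "whoami" and the image name are just examples
services:
  whoami:
    image: traefik/whoami:latest
    restart: unless-stopped
    ports:
      - "8080:80"   # host:container
```

Rename the service, change the image, `docker compose up -d`, done.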
If you want to do anything outside of "run this image as it comes", then it's much less effort to manage the containers yourself. I don't think you can even modify anything about the containers running as add-ons, can you?
Here's an example. I use Frigate, which I know is available as an add-on. I also know add-ons can't use the Nvidia runtime, so no GPU decode like I'm using. And can you even mount USB or PCIe devices into the container? Because the whole point of Frigate is to run AI classifiers accelerated by cheap accelerators like the Coral. I also don't believe you have any way of defining a mount point for your video, so I can't mount my NVR's storage path into the Frigate container. How does that work? Does it use the media path common to all the other services installed by HA?

I also keep my cams on a segregated subnet, away from everything, even my other IoT devices. But I don't route all that traffic across VLANs. I mean, I could, but then that's unnecessary traffic to and from the router, and the camera feeds would go down whenever the router updates. Instead, my Frigate container has an interface on the camera VLAN and another on the IoT subnet with HA. I know you definitely can't do any fancy networking like that with add-ons.
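This is roughly the kind of compose config I mean, which an add-on can't express (paths, network names, and the device node are from my setup and purely illustrative, adjust for yours; a USB Coral would pass through `/dev/bus/usb` instead):

```yaml
# sketch only: Nvidia runtime, Coral passthrough, custom NVR mount,
# and two attached networks, none of which add-ons let you do
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    runtime: nvidia               # GPU decode via the Nvidia container runtime
    devices:
      - /dev/apex_0:/dev/apex_0   # PCIe Coral passed into the container
    volumes:
      - /mnt/nvr:/media/frigate   # NVR storage mounted wherever I want
    networks:
      - cameras                   # interface on the camera VLAN
      - iot                       # interface on the IoT subnet with HA

networks:
  cameras:
    external: true                # pre-created network on the camera VLAN
  iot:
    external: true
```

The two entries under `networks:` are what give the container a leg in each subnet, so camera traffic never has to cross the router.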
E: sorry, I didn't mean to start an argument. It's been a long, long day.