Further to what Mark said, there are dozens of things that make "batch style workloads" and "transactional style workloads" very different from each other.
The first thing that comes to mind is that HTTP itself is ill-suited to batch requests with long latencies. Many load balancers don't support long timeouts on HTTP connections; our own timeouts default to 60 seconds. The configuration changes you'd have to make to work around that run straight into one of the defining constraints of high-throughput systems: the very real tension between concurrency, latency and TPS. TPS = concurrency / (latency in seconds), so with a single concurrent request you need total request/response time under 10 milliseconds just to get above 100 TPS.
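To make that relationship concrete, here's a small sketch (the function name and the sample numbers are mine, for illustration) of the TPS arithmetic above:

```python
def tps(concurrency: int, latency_seconds: float) -> float:
    """Sustained requests/second given in-flight requests and per-request latency."""
    return concurrency / latency_seconds

# One in-flight request needs <= 10 ms latency to reach 100 TPS:
print(tps(1, 0.010))    # 100.0
print(tps(1, 0.005))    # 200.0

# A slow batch-style back end (say, 30 s per request) needs enormous
# concurrency to hold the same rate:
print(tps(3000, 30.0))  # 100.0 TPS requires 3000 requests in flight
```

That last line is the crux: long-latency batch traffic can only sustain throughput by keeping thousands of requests in flight at once, which is exactly where the trouble below starts.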
If I were planning a "batch" processing system, I'd want strict controls on concurrency, enforced by some fairly aggressive limiting mechanism. Without one, a slow back end means every in-flight message is held in RAM for the duration of its request, which makes it quite easy to run the gateway out of memory.
Using a separate port with a relatively low thread count and a private thread pool is one way to do that.