Assumption: you are not talking about fault tolerance; therefore, the notion of failing over a running transaction does not apply.
Architecturally, DevTest does support running multiple VSEs, Coordinators, and Simulators beneath a single Registry. Additional configuration is required to identify the additional servers (VSEs, Coordinators, Simulators) and their communication ports.
DevTest does not care whether its servers run on the same physical/virtual server or are spread across several servers. However, you should consider separating components onto different physical or virtual server OSes because of the added drain on CPU and memory. It is also more likely that a physical/virtual server OS goes offline than an individual DevTest server, so putting everything on a single server might not be a viable strategy.
From a routing perspective, as of version 10, DevTest does not contain functionality to 'fail over' from one VSE to another if a VSE is offline or not running. Right now, we leave this feature to other tools such as load balancers.
Load balancers provide the features you want to accomplish your requirements. A load balancer can expose a single IP address and port and then route based on a configurable set of routing rules. The service consumers need only know the IP address and port exposed by the balancer; the balancer routes to the underlying VSE or VSEs.
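As one illustration of this routing pattern, a minimal HAProxy fragment might look like the following. All names, addresses, and ports here are hypothetical, and DevTest does not require HAProxy specifically; any balancer with health checks and configurable routing can play this role.

```
# Consumers call one VIP/port; HAProxy routes to the underlying VSEs.
# (Hypothetical addresses; substitute your own network details.)
frontend vse_vip
    bind 10.0.0.100:8001          # the single IP/port consumers know
    default_backend vse_pool

backend vse_pool
    balance roundrobin
    option httpchk GET /          # mark a VSE down if it stops responding
    server vse_a 10.0.1.11:8001 check
    server vse_b 10.0.1.12:8001 check
```

With health checks enabled, the balancer stops routing to a VSE that goes offline, which is effectively the 'fail over' behavior DevTest itself does not provide.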
As you develop a strategy, you should consider the architectures of your network, the selected load balancer, the Registry, the VSEs and the individual virtual services.
For example, if you expose an IP through the balancer, can you still access each physical machine to perform necessary maintenance, review logs, deploy services, and so on? RDP and certain other administrative connections are not desirable via the balancer.
You can potentially access the Registry through the load balancer's exposed IP, but there is no facility to access multiple Registries via a single load-balanced IP on port 2010, so you probably want to retain server-level access to those ports. Some organizations expose the servers only via the load balancer (or virtual IP, as some term it). That stricter strategy of exposing elements only via the virtual IP may turn into a non-starter when it comes to tasks such as deploying a service or managing components.
The Registry, VSEs, Coord, Sims, etc. still need to talk to each other using their respective internal ports. This is best accomplished via the physical/virtual machine-to-machine communications versus communication through the balancer's exposed IP addresses. You certainly do not want to load balance the VSE's internal communication with the Registry and vice versa -- that does not make a lot of sense.
Similarly, virtual service deployments must be performed against each VSE to which the service will be deployed; there is no 'deploy service X to all running VSEs' feature.
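Because there is no deploy-to-all feature, teams often script the per-VSE deployment loop themselves. The sketch below is hypothetical: the host names, VSE names, port, and endpoint path are assumptions modeled on the DevTest DCM REST API and must be verified against your installation's documentation before use.

```python
# Hypothetical sketch: iterate over every target VSE and deploy the same
# MAR file to each one, since DevTest offers no bulk-deploy operation.

def deploy_url(registry_host: str, vse_name: str, port: int = 1505) -> str:
    """Build the (assumed) deploy endpoint for one named VSE."""
    return f"http://{registry_host}:{port}/api/Dcm/VSEs/{vse_name}/actions/deployMar"

def deploy_to_all(registry_host: str, vse_names, mar_path: str):
    """Loop over every target VSE. A real script would POST the MAR file
    (multipart/form-data, with credentials) to each URL; printed here to
    keep the sketch offline."""
    urls = [deploy_url(registry_host, name) for name in vse_names]
    for url in urls:
        print(f"would deploy {mar_path} -> {url}")
    return urls

deploy_to_all("registry.example.com", ["VSE_A", "VSE_B"], "orders-service.mar")
```

The same loop structure applies whatever deployment mechanism you use (REST, command line, or CI tooling): the list of VSEs is explicit, and each deployment is an individual call.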
Lastly, if your virtual services are stateful or use the Shared Model Map during processing, you need a mechanism to make subsequent requests from the consumer application sticky to the VSE that received the original request, because these resources are not shared across VSEs.
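One common way to achieve that stickiness is persistence at the balancer. Sketched here in HAProxy terms with hypothetical names and addresses, source-IP persistence pins each consumer to the VSE that served its first request:

```
backend vse_pool
    balance roundrobin
    # Pin each consumer source IP to one VSE, since stateful services and
    # the Shared Model Map are not shared across VSEs.
    stick-table type ip size 100k expire 30m
    stick on src
    server vse_a 10.0.1.11:8001 check
    server vse_b 10.0.1.12:8001 check
```

Source-IP stickiness is the simplest option; if consumers sit behind a shared proxy or NAT, cookie- or header-based persistence may be a better fit, depending on what your balancer supports.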
The short answer is yes, it is doable. The long answer is that a better understanding of each individual customer's specific environment is required to make a definitive assessment and recommendation.