How many applications are you 'polling' and what types of transports (HTTP, JMS, etc.) are involved?
I would consider implementing a Test Case or Test Suite to perform this type of function rather than a virtual service. The test might be deployed as a CVS (Continuous Validation Service) monitor that runs every n minutes.
Let's assume HTTP as an example:
In the first step, one might drive the input to the Test Case from a dataset (Excel, CSV, database table, etc.) with columns such as application name, endpoint URL, endpoint port, sample request payload, and transport type (REST, Web Service). Driving from an input source keeps the Test Case generic. An assertion could then branch the test to either a REST step or a Web Service Execution (XML) step.
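As a sketch of what such a dataset might look like (the column names here are illustrative assumptions, not anything DevTest mandates), a CSV-driven input source could be parsed like this:

```python
import csv
import io

# Illustrative dataset; column names and values are assumptions, not a
# DevTest requirement. One row per application to poll.
DATASET = """app_name,endpoint_url,port,transport,sample_payload
orders,http://orders.example.com/health,8080,REST,{"ping": true}
billing,http://billing.example.com/status,8443,WEBSERVICE,<ping/>
"""

def load_rows(text):
    """Parse the dataset into dicts, one per application to poll."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_rows(DATASET)
for row in rows:
    # A branching assertion in the test would key off row["transport"]
    # to pick the REST or Web Service execution step.
    print(row["app_name"], row["transport"])
```

In DevTest itself the dataset binding does this work for you; the point is only that each row carries everything one polling iteration needs.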
The next step performs the REST or Web Service call to the application using an acceptable request payload from the input source. An assertion traps the HTTP timeout (service unavailable); if a response is received, another assertion examines the HTTP response code (200 = OK, 500 = server error, etc.).
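The trap-the-timeout-or-check-the-code logic can be sketched in plain Python (a hypothetical helper, not DevTest's assertion mechanism; assumes the HTTP transport case):

```python
import socket
import urllib.error
import urllib.request

def poll_endpoint(url, payload=None, timeout=5):
    """POST the sample payload (or GET if none) and classify the outcome.

    Returns ("UP", http_code) whenever any HTTP response arrives, or
    ("DOWN", "unreachable") when the call times out or the connection fails.
    """
    data = payload.encode() if payload else None
    req = urllib.request.Request(url, data=data)
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return ("UP", resp.status)
    except urllib.error.HTTPError as exc:
        # The server responded, but with an error code (e.g. 500).
        return ("UP", exc.code)
    except (urllib.error.URLError, socket.timeout, TimeoutError):
        # Timeout or connection failure: treat as service unavailable.
        return ("DOWN", "unreachable")
```

Note that a 500 still counts as "the application answered" here; whether that is UP or DOWN for your dashboard is a requirements decision.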
The results of the 'polling' are written into a database table with a timestamp and other pertinent information about the application.
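A minimal sketch of that results table, using SQLite for illustration (the table and column names are assumptions; adapt them to your own schema and database):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")  # stand-in for your real database
conn.execute("""
    CREATE TABLE IF NOT EXISTS poll_results (
        polled_at  TEXT    NOT NULL,   -- UTC timestamp of the poll
        app_name   TEXT    NOT NULL,   -- which application was polled
        status     TEXT    NOT NULL,   -- e.g. UP / DOWN
        http_code  INTEGER             -- response code, NULL on timeout
    )
""")

def record_result(app_name, status, http_code=None):
    """Insert one polling outcome with a UTC timestamp."""
    conn.execute(
        "INSERT INTO poll_results VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), app_name, status, http_code),
    )
    conn.commit()

record_result("orders", "UP", 200)
record_result("billing", "DOWN")
```

With a table like this, any reporting tool can chart availability per application over time.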
Then, one could decide whether to fail the test case or allow it to succeed -- this will depend on your requirements -- so that the result shows in the reporting console.
You might consider a suite or an execution model that sets the input source to Global so the tests execute in parallel. The reason is that HTTP timeouts, should you encounter them, are governed by the HTTP timeout of the application the test is calling. Single-threading can be a disadvantage because each call may wait a variable amount of time (e.g., up to 3 minutes per application) for a response. Single-threading 10 applications, each with a 180-second timeout, means the test could take up to 1,800 seconds, or 30 minutes, which is most likely unacceptable.
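The timing argument is easy to demonstrate outside DevTest. In this sketch each simulated poll sleeps 0.2 s (standing in for a 180 s timeout); run in parallel, total wall time is roughly the slowest single call rather than the sum of all of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def poll(app, delay):
    """Simulated poll: the sleep stands in for a slow or timing-out HTTP call."""
    time.sleep(delay)
    return (app, "UP")

# 10 applications; 0.2 s here plays the role of a 180 s HTTP timeout.
apps = [(f"app-{i}", 0.2) for i in range(10)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(apps)) as pool:
    results = list(pool.map(lambda a: poll(*a), apps))
elapsed = time.perf_counter() - start
# Parallel wall time is ~0.2 s instead of the ~2 s a serial loop would take:
# the worst case becomes one timeout, not ten timeouts back to back.
```

The same reasoning is why a Global input source (each parallel test instance pulling its own row) scales to many applications where a single-threaded loop does not.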