Hello cdtj,
I would consider myself in category 3 - creating indexes directly in the DB.
As for performance concerns: as long as you don't go overboard with index creation, the only effect you will see is DB growth, as indexes take a lot of space. We have a few tables where the indexes take more space than the actual data. Just be careful with tables that get a lot of inserts/updates. Extra indexes can slow you down there, because every write to that table means updating all of its indexes as well.
Which indexes to create? There is no simple answer to that question, and it depends on a lot of factors. But mainly it comes down to two: what queries are run against the table, and how often. If a query runs once a day, then finishing in a few minutes is fine. But if you have a query that gets executed several times every second, then you should be really aggressive with indexing and try to make it as close to perfect as possible.
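As a minimal sketch of what such an aggressive index might look like - the table and column names here are purely illustrative, not from the actual Service Desk schema:

```sql
-- Hypothetical example: a table queried very often by status and open date.
-- A composite nonclustered index on the filter columns, with INCLUDE adding
-- a frequently selected column to the leaf level so the query can be
-- answered from the index alone (a "covering" index).
CREATE NONCLUSTERED INDEX IX_Ticket_Status_OpenDate
    ON dbo.Ticket (Status, OpenDate)
    INCLUDE (AssignedTo);
```

Remember the trade-off from above: every index you add like this makes inserts/updates on that table a bit more expensive.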
To identify expensive queries, you can use Activity Monitor in SQL Server Management Studio. There is a tab called Recent Expensive Queries. The ones with the most Logical Reads/s are the usual suspects.
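Activity Monitor pulls this from the dynamic management views under the hood, so if you prefer a query you can ask the standard sys.dm_exec_query_stats DMV directly. A sketch (requires VIEW SERVER STATE permission; it only covers plans still in the cache):

```sql
-- Top cached statements by total logical reads since they entered the cache.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,   -- worker time is in microseconds
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```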
You can also use SQL Server Profiler to collect information on expensive queries. You can, for example, filter for queries that take more than 2000 ms of CPU, and you will probably start seeing the first queries that might need your attention. Then take these queries to Management Studio, run them, and check the execution plan, where you will see which indexes were used. I think the information that Lindsay pointed to will come in handy here.
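While building execution plans, SQL Server also records indexes it thinks were missing, and you can read those suggestions back from the missing-index DMVs. Treat them as hints to investigate, not as a list to create blindly - they don't account for write overhead or overlap between suggestions:

```sql
-- Index suggestions SQL Server has collected since the last restart,
-- roughly ordered by how much benefit it estimates.
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact   -- estimated % cost reduction if the index existed
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```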
Also check pdm_vdbinfo. It should give you an overview of DB usage from the Service Desk perspective.
And since you are trying to tune DB performance, you might also check the NX.env file and the following parameters:
NX_MAX_DBAGENT
NX_VIRTDB_SS_QUEUE*
NX_VIRTDB_AGENT*
Well, it is just a bunch of hints on where to look, but I hope it helps you in your quest for a responsive Service Desk.