I would compare accepting user-supplied regular expressions to parsing most other kinds of structured user input, such as date strings or markdown, in terms of the risk of code execution. Regular expressions are much more complex than date strings or markdown (although safely producing HTML from untrusted markdown has its own risks), and that complexity leaves more room for exploitation, but the basic principle is the same: exploitation involves finding unexpected side effects of the parsing/compilation/matching process.
Most regex libraries are mature, and many languages ship one in the standard library, which is a pretty good (but not certain) indicator that they are free of major issues leading to code execution.
That is to say, accepting user-supplied regexes does increase your attack surface, but it's not unreasonable to make the measured decision to accept that relatively minor risk.
Denial-of-service attacks are a little trickier. Most regular expression libraries are designed with performance in mind, but mitigating intentionally slow input (catastrophic backtracking, often called ReDoS) is usually not among their core design goals. Whether accepting user-supplied regexes is appropriate from the DoS perspective is therefore more library-dependent.
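To make the risk concrete, here is a minimal sketch of catastrophic backtracking using Python's standard `re` module and the classic pathological pattern `(a+)+$`. The pattern and input sizes are illustrative choices of mine, not drawn from any particular incident:

```python
import re
import time

# Nested quantifiers force a backtracking engine to try exponentially
# many ways to split the run of 'a's before the match finally fails.
EVIL_PATTERN = re.compile(r"^(a+)+$")

def time_failing_match(n: int) -> float:
    """Time a failing match on n 'a's followed by a 'b'."""
    subject = "a" * n + "b"
    start = time.perf_counter()
    assert EVIL_PATTERN.match(subject) is None
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work; n in the low twenties is
# already noticeably slow, and n around 30 can run for minutes.
for n in (10, 15, 20):
    print(f"n={n}: {time_failing_match(n):.4f}s")
```

A user who controls the pattern can submit something like this deliberately, which is exactly the scenario a plain backtracking engine does not defend against.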
For example, the .NET regex library accepts a match timeout (throwing RegexMatchTimeoutException when it is exceeded), which can be used to mitigate DoS attacks.
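Python's standard library has no equivalent native timeout, but the same mitigation can be approximated by running the match in a child process and killing it when it overruns a budget. This is a heavier-weight sketch of the idea, not a production implementation; the `RegexTimeout` exception and the budget values are my own assumptions:

```python
import multiprocessing
import re

class RegexTimeout(Exception):
    """Raised when a match exceeds its wall-clock budget (assumed name)."""

def _worker(pattern, subject, conn):
    # Runs in the child process; report whether the pattern matched.
    conn.send(re.search(pattern, subject) is not None)

def search_with_timeout(pattern: str, subject: str, seconds: float = 0.5) -> bool:
    """Approximate .NET's matchTimeout by sandboxing the match in a child
    process that is terminated if it runs too long."""
    parent_conn, child_conn = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=_worker,
                                   args=(pattern, subject, child_conn))
    proc.start()
    proc.join(seconds)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        raise RegexTimeout("regex match exceeded time budget")
    return parent_conn.recv()

if __name__ == "__main__":
    print(search_with_timeout(r"needle", "a needle in a haystack"))
    try:
        # The pathological pattern from before is now cut short.
        search_with_timeout(r"^(a+)+$", "a" * 40 + "b", seconds=0.2)
    except RegexTimeout:
        print("pathological pattern aborted")
```

Spawning a process per match is expensive, which is part of why a library-native timeout like .NET's is the nicer option when available.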
RE2 guarantees execution time linear in the size of the input, which may be acceptable if you know your search corpus falls within some reasonable size limit.
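Even with a linear-time engine, the work still scales with corpus and pattern size, so it pays to enforce explicit limits up front. A minimal sketch of that policy, using stdlib `re` only as a stand-in for a linear-time engine like RE2, with limit values that are purely illustrative:

```python
import re

# Assumed policy values; tune these for your own service.
MAX_PATTERN_LEN = 200
MAX_SUBJECT_LEN = 64_000

def safe_search(pattern: str, subject: str):
    """Reject oversized or invalid input before matching."""
    if len(pattern) > MAX_PATTERN_LEN:
        raise ValueError("pattern too long")
    if len(subject) > MAX_SUBJECT_LEN:
        raise ValueError("subject too long")
    try:
        compiled = re.compile(pattern)
    except re.error as exc:
        raise ValueError(f"invalid pattern: {exc}") from exc
    return compiled.search(subject)
```

With a linear-time engine behind it, these caps translate directly into a bound on worst-case matching time.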
In situations where availability is absolutely critical, or where you're trying to minimize your attack surface as much as possible, it makes sense to avoid accepting user-supplied regexes; otherwise, I think it's a defensible practice.