In many organisations I have heard speed cited as a primary benefit of automation. In most cases tools do make things quicker, but faster doesn't necessarily mean better. It's important to assess automation against other criteria too.
The development team, not only the testers, should regularly reflect upon and discuss their automation. One approach to this conversation is to have each team member rate, on a scale, how strongly they agree or disagree with the following statements:
I understand what is being checked by our automation
I feel confident that a failure indicates an important problem in the application
I do not manually repeat those checks that are automated
I find it easy to diagnose the cause of a failure
The collated answers will help to uncover whether people understand and agree with the coverage, whether they believe the chosen implementation of the automated checks is robust, and whether they are able to investigate the problems those checks discover.
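As a rough illustration of what collating these answers might look like, here is a small sketch in Python. Everything in it is an assumption for illustration: the 1-to-5 Likert scale, the `collate` function, and the thresholds used to flag statements for discussion are all hypothetical, not part of any prescribed process.

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert scale: 1 = strongly disagree, 5 = strongly agree.
STATEMENTS = [
    "I understand what is being checked by our automation",
    "I feel confident that a failure indicates an important problem",
    "I do not manually repeat those checks that are automated",
    "I find it easy to diagnose the cause of a failure",
]

def collate(responses):
    """responses: one list of scores per person, one score per statement.

    Returns, per statement, the mean score and the spread (sample
    standard deviation), so the team can spot low agreement or
    wide disagreement worth discussing.
    """
    report = []
    for i, statement in enumerate(STATEMENTS):
        scores = [person[i] for person in responses]
        report.append({
            "statement": statement,
            "mean": round(mean(scores), 1),
            "spread": round(stdev(scores), 1) if len(scores) > 1 else 0.0,
        })
    return report

# Example: answers from four (hypothetical) team members.
answers = [
    [5, 4, 2, 4],
    [4, 2, 3, 5],
    [5, 3, 2, 4],
    [3, 2, 4, 2],
]
for row in collate(answers):
    # Arbitrary thresholds: flag low average agreement or a wide spread.
    flag = " <- discuss" if row["mean"] < 3.5 or row["spread"] > 1.0 else ""
    print(f'{row["mean"]:.1f} (+/-{row["spread"]:.1f}) {row["statement"]}{flag}')
```

A low mean suggests the team broadly disagrees with a statement; a wide spread suggests team members see the automation very differently, and either signal is a useful prompt for the conversation.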
Automation may execute quickly, but without assessing its value it may be quickly doing nothing useful.