Photos flagged by the AI are then sent to a person for review.
If an offense is confirmed, the driver is sent either a warning notice or a notice of intended prosecution, depending on the severity of the offense.
The AI merely "identifying" offenses is the easy part. It would be interesting to know whether the AI itself correctly identified 300 offenses, or whether the human reviewer acted on 300 of the AI's flags. That's potentially a huge difference, and it would have been the relevant part of the news.