Great perspective! I’m curious — given the current focus on regulating AI as a general-purpose technology, which approach makes more sense to you (especially from the perspective of enabling continued improvement of your product):
1. Legislation at the model level, focused on an adaptive risk framework, with most restrictions passed to the application layer by proxy.
2. Regulation weighted more heavily at the application layer on a sectoral/agency rule-making basis, especially given your point about great models not automatically translating into great applications.
I know these two aren’t mutually exclusive, but they are sometimes presented that way across the opposing schools of thought in AI policy and governance.
Would love to hear your perspective, given the points you made here! I can see an argument for either end.