{"p":"","h":{"iv":"ROXSYW+cfvEbFHu5","at":"ocxplSQjdRC3tXEtB/9/wg=="}}

@darius that's a lot of words and jargon to say "imitation learners shouldn't extrapolate, but they do"

…all while missing Emily (&c)'s point that the models don't actually "know" *anything*, they just produce statistically probable sequences

Darius Kazemi

@trochee yeah I usually explain it as "very complicated Markov chains" - though of course there are always people who claim that that *is* what knowing is
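
For concreteness, here is roughly what the "very complicated Markov chains" analogy cashes out to: a chain just tallies which words followed which in its training text, then samples statistically probable continuations, with no representation of what any of it means. A minimal sketch in Python (the toy corpus and function names are illustrative, not anything from the thread):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Tally which word follows each `order`-word state in the text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])  # duplicates preserve frequency
    return chain

def generate(chain, order=1, length=12):
    """Emit a statistically probable sequence by sampling followers."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break  # this state never continued in the training text
        out.append(random.choice(followers))  # frequent followers win more often
        state = tuple(out[-order:])
    return " ".join(out)

# Toy corpus (illustrative only): the chain can recombine it plausibly,
# but it has no notion of what any word means.
corpus = "the model does not know anything the model only continues sequences"
print(generate(build_chain(corpus)))
```

Because duplicates are kept in the follower lists, frequent continuations are proportionally more likely to be sampled, which is the "statistically probable sequences" mechanism in miniature; large language models replace the lookup table with a learned function over much longer contexts.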
