Information

Epilepsy explained by computational neuroscience



A few years ago, during my mathematics master's, I took a module in theoretical neuroscience. I showed that the mathematical framework the course developed for neurons would, with a small adjustment of the parameters, produce spike trains similar to those observed during epileptic seizures. These parametric adjustments could be attributed to mutations in ion channel/receptor structures. It was one of the highlights of the course, and when I came to apply for an interdisciplinary maths/bio doctoral programme, I cited it as such in my personal statement.

Now the interview is fast approaching, and I am fairly sure the topic will come up, but for the life of me I cannot track down a reference. After four years out of the game, my memory is hazy at best.

As I recall, it had something to do with short-term plasticity. Normal neurons' responses to relentless high-frequency spike trains decay over time, whereas pathological neurons' responses do not. This could, however, be completely wrong.

What is the reference for the model I describe above? An introductory/pedagogical treatment is preferred, but the original paper is fine too.


Unfortunately, epileptic spiking can arise very easily in many neural modelling studies, so one must be very careful in pinning down particular aspects of a model as the cause of epileptic activity. One possibility, as you mentioned, is a lack of short-term depression, which can allow prolonged firing. But then again, a simple increase in the global excitatory coupling can also produce similar dynamics.
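For intuition only (this is not the reference you are after): a minimal sketch of a Tsodyks-Markram-style depressing synapse, with illustrative parameter values I have chosen myself, showing how removing short-term depression yields sustained responses to a relentless high-frequency train instead of the normal decay.

```python
# Toy short-term depression model: each spike releases a fraction U of the
# currently available resources x; resources recover between spikes with
# time constant tau_rec. Setting depressing=False (resources never deplete)
# mimics the "pathological" case of responses that do not decay.
import numpy as np

def synaptic_response(spike_times, U=0.5, tau_rec=0.8, depressing=True):
    """Return the per-spike synaptic efficacy for a train of spike times (in s)."""
    x = 1.0                      # fraction of available resources
    efficacies = []
    last_t = None
    for t in spike_times:
        if depressing and last_t is not None:
            # resources recover exponentially towards 1 between spikes
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        elif not depressing:
            x = 1.0              # no depletion: every spike sees full resources
        release = U * x          # amount released by this spike
        efficacies.append(release)
        x -= release             # depletion
        last_t = t
    return np.array(efficacies)

spikes = np.arange(0.0, 1.0, 0.02)                        # 50 Hz train for 1 s
print(synaptic_response(spikes)[:5])                      # decaying responses
print(synaptic_response(spikes, depressing=False)[:5])    # sustained responses
```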

Good luck with your interview.


Graduate students

Shenandoah received a B.A. in neuroscience and mathematics from Vassar College in 2018 and a B.E. in biomedical engineering from Dartmouth College in 2019. She is currently a PhD student in the Department of Bioengineering and a participant in the NeuroTech program at Stanford.

Lucas received his B.S. in neuroscience and behavioral biology from Emory, where he studied ion channels and developed machine-learning-based tools for histology. Lucas also trained at the Princeton Neuroscience Institute, where he studied motor coordination and internal states in the fly, and at the University of Pennsylvania, where he studied the role of sodium leak in neuronal hyperexcitability. Lucas is interested in dissecting and interrogating the neural circuits of perception to understand their algorithmic properties.

Tyler received an ScB in Applied Mathematics-Economics from Brown University in 2014. He is currently a PhD student in the Neurosciences Graduate Program at Stanford and is co-advised by Prof. Shaul Druckmann in Neurobiology. He is interested in developing theory and machine-learning tools to guide experimental design for whole-brain imaging and stimulation of neural ensembles.

Kang Yong Loh received his B.S. in Chemistry from the University of Illinois at Urbana-Champaign in 2017 under the guidance of Prof. Yi Lu, developing a near-IR tool for imaging metal ions in living systems. He then spent a bridge year in the laboratory of Prof. Xiaogang Liu at the National University of Singapore, investigating the roles of lanthanides in photothermal and photoacoustic mechanisms. He is currently a PhD student at Stanford University in the ChEM-H Institute and the Department of Chemistry. His research interests include developing new chemical and protein tools to address questions in neuroscience. Kang Yong is co-advised by Prof. Carolyn R. Bertozzi.

YoungJu received his BS in physics and mathematics from KAIST (Korea Advanced Institute of Science and Technology) in 2018 and is currently a PhD student in applied physics at Stanford University. At KAIST he worked with Profs. YongKeun Park and Won Do Heo on interdisciplinary technology development at the intersection of optics, genetics, and computation. At the same time, he also works with the startup Tomocube, which is commercializing his main undergraduate work, data-driven holographic diagnostics, to make a real-world impact on biomedicine. In the Deisseroth laboratory he is now interested in data-driven discovery and manipulation of neural circuits using all-optical electrophysiology, machine learning, and dynamical systems theory. He is co-advised by Dr. David Sussillo at Google Brain.

Marija received her B.S. in Chemical Engineering in 2011 and a dual M.S. in Biochemical and Chemical Engineering in 2013 from the University of Belgrade. During her master's research she studied the DNA damage response and radiation systems biology in the laboratory of Dr. Sylvain Costes at UC Berkeley. Before coming to Stanford, she spent four years in industry doing research in the life sciences and translational medicine, developing new nanotechnology platforms. Marija is currently in the Bioengineering Ph.D. program at Stanford.

Ethan received an ScB in Applied Mathematics-Biology from Brown University in 2013 and is currently a PhD student in the Neurosciences Graduate Program at Stanford. Previously, he developed a genetic method for anterograde transsynaptic labeling with Gilad Barnea at Brown University. He is interested in developing genetic techniques to better manipulate and interrogate neural circuit dynamics. Ethan is co-advised by Professor Liqun Luo.

Misha received a bachelor's degree in Bioengineering and Business Economics and Management from Caltech and is currently in the Bioengineering PhD program at Stanford. She is interested in developing new opsins to expand the optogenetic toolkit and in applying optical tools to probe the neural circuit dynamics underlying behavior.

Sam received his B.S. in Biomedical Engineering from Yale University in 2012 and is currently a PhD student in the Bioengineering program at Stanford.

Yoon received his bachelor's degree in neurobiology in 2013 and his M.S. in biology in 2014 from Stanford University. During his master's he studied the biochemistry of postsynaptic and presynaptic membrane proteins in Thomas Sudhof's laboratory. He is currently in the Bioengineering Ph.D. program at Stanford and is co-advised by Dr. Brian Kobilka. He works on understanding the mechanisms of different opsins and developing new tools for neuroscience research.

Noah holds a Bachelor of Science degree from Johns Hopkins University, where he majored in Biomedical Engineering and Applied Mathematics and Statistics. Today he works on new neural stimulation methods, whole-brain imaging of neural dynamics in larval zebrafish, and computational tools for the big-data problems arising from volumetric neural imaging datasets.

Isaac received a B.S. in Engineering Physics with a specialty in photonics from Stanford, an M.S. in Electrical Engineering with a focus on imaging and optimization, also from Stanford, and is currently a PhD student in the Electrical Engineering program at Stanford. He is advised by Prof. Gordon Wetzstein in EE.


Modulation of Synaptic Transmission

Although hyperexcitability of individual neurons can contribute substantially to epileptogenesis, epilepsy is commonly regarded as a network disease driven by aberrant synaptic interaction between neurons (McCormick and Contreras, 2001). Synaptic plasticity (in particular, short-term synaptic depression and facilitation) can critically shape the functional strength of synaptic coupling, so it is possible that specific properties of synaptic transmitter release could predispose a network to seizures.

Astrocytes are ideally positioned to modulate synaptic contributions to epileptogenesis. Experimental data [reviewed in Araque et al. (1999)] show that astrocytic processes are often apposed to synaptic junctions, giving rise to the notion of the "tripartite synapse" consisting of the presynaptic bouton, the astrocyte, and the postsynaptic density. Stimulation of synaptic boutons and the subsequent release of neurotransmitter elicit complex calcium responses in nearby astrocytes, showing that these glial cells can synchronize with synaptic activity [reviewed in Haydon (2001)]. Elevation of astrocytic calcium culminates in the regulated release of molecules such as glutamate and/or ATP which, by diffusing through the extracellular space (ECS) and binding to dedicated receptors, can modulate synaptic transmission (Araque et al., 1998) and the excitability of neighboring neurons (Parpura and Haydon, 2000; Reyes and Parpura, 2008) (Figure 1). Recently, it was shown that NMDA-R-mediated astrocytic modulation of postsynaptic neuronal excitability can promote epileptogenesis in neuronal networks (Gomez-Gonzalo et al., 2010). This points to a role for tripartite synapses not only in epilepsy, but perhaps also in other diseases of the nervous system that are rooted in aberrant synaptic communication (Halassa et al., 2007).

Figure 1. Astrocytes regulate presynaptic transmission and postsynaptic excitability at the glutamatergic synapse. After successful vesicular fusion, a fraction of the released glutamate can spill over and reach metabotropic glutamate (mGlu) receptors on the membrane of the adjacent astrocyte. Glutamate activation of astrocytic mGlu receptors triggers a cascade of biochemical events culminating in elevation of inositol trisphosphate (IP3) and an increase in the intra-astrocytic free calcium concentration. Calcium triggers a release of astrocytic glutamate (a process termed gliotransmission) that can bind mGlu receptors on the presynaptic membrane and/or NMDA receptors on the postsynaptic side, thereby modulating the probability of synaptic transmitter release and/or postsynaptic excitability. For more quantitative details on the process of astrocytic modulation of synaptic transmission, see De Pitta et al. (2011).

The first computational model, to our knowledge, to explore the possible involvement of astrocytes in synaptic mechanisms of epileptogenesis examined the basic idea of astrocytic "eavesdropping" on neuronal activity and assumed that the only effect of neuronal activity-dependent astrocytic calcium elevations was to promote neuronal depolarization (Nadkarni and Jung, 2003). This study concluded that astrocytes promote epileptogenesis through positive feedback. Similar conclusions were reached in another modelling study, which showed that glutamate released from astrocytes could be responsible for the paroxysmal depolarization shifts that are often associated with epileptic activity (Silchenko and Tass, 2008). However, gliotransmitter released from astrocytes can up- or down-regulate the release of glutamate from synaptic boutons (Araque et al., 1998; Zhang et al., 2003), and computational modelling studies have suggested that such negative feedback modulation of synaptic transmission is responsible for the regulation of spontaneous paroxysmal activity in neural cultures (Volman et al., 2007). Thus, in the context of the tripartite synapse, astrocytic signaling can have either a positive or a negative impact on neuronal activity, or perhaps a combination of both. Unraveling the mechanisms behind this diversity of modulatory effects may help us understand the conditions under which astrocytes suppress or promote seizures.

Synaptic neurotransmitter release is a calcium-driven process, so a change in presynaptic calcium levels through glial modulation (for example, through astrocytic activation of presynaptic receptor channels) can affect synaptic transmission. A computational model of a tripartite synapse supported the possibility that astrocytes can optimize synaptic information transmission by influencing presynaptic calcium levels (Nadkarni et al., 2008). In another recent modelling study (De Pitta et al., 2011), astrocytes either depressed or facilitated synaptic transmission, suggesting that the net effect of astrocytes on transmitter release probability is determined by the interplay between the various (presynaptically located) mGlu receptors and the baseline synaptic vesicle release probability. By incorporating these pathways, the model succeeded in explaining several conflicting experimental observations regarding the role of astrocytes in modulating synaptic transmission.
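To make the direction-of-effect point concrete, here is a minimal, hypothetical sketch; the parameter names and functional form are illustrative assumptions of my own, not the De Pitta et al. (2011) model. It shows how the same astrocytic calcium signal can either facilitate or depress release, bounded by the baseline release probability.

```python
# Toy gliotransmitter feedback on vesicle release probability.
import numpy as np

def modulated_release_probability(p0, astro_ca, k=0.5, facilitating=True):
    """Illustrative release probability under gliotransmitter feedback.

    p0         : baseline vesicle release probability (0..1)
    astro_ca   : normalized astrocytic calcium signal (0..1)
    k          : coupling strength of the gliotransmitter effect (made up)
    facilitating : True pushes release towards 1, False pushes it towards 0
    """
    if facilitating:
        p = p0 + k * astro_ca * (1.0 - p0)
    else:
        p = p0 - k * astro_ca * p0
    return np.clip(p, 0.0, 1.0)

ca = np.linspace(0.0, 1.0, 5)
print(modulated_release_probability(0.3, ca, facilitating=True))   # facilitated
print(modulated_release_probability(0.3, ca, facilitating=False))  # depressed
```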


Abstract

Epilepsy is a complex set of disorders that can involve many parts of the cortex as well as underlying deep-brain systems. The myriad manifestations of seizures, which can be as varied as déjà vu and olfactory hallucination, can therefore give researchers insight into regional functions and relationships. Epilepsy is also genetically and pathophysiologically complex: it involves microscopic changes (on the scale of ion channels and synaptic proteins), macroscopic changes (on the scale of brain trauma and rewiring), and intermediate changes, in a complex interplay of causality. It has long been recognized that computational modelling will be needed to disentangle causality, to better understand seizure spread, and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made in modelling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating the disorder.


4.0 Neural cells, anatomy, and the electrical personality of neurons

To gain a clear understanding of how the brain works and how we perceive the world around us, let us look at the primary building block of the brain, the neuron. Neurons are the computational units of the human brain.

The brain can be broken down into discrete units called neurons. Many neuronal shapes are possible; for example, in the visual cortex the neurons are pyramidal, while in the cerebellum they are called Purkinje cells.


METHODS: Computational steps

In this study, our raw dataset consisted of EEG recordings of seizures with 60 s of data before and after each seizure. Data were collected from 42 patients with at least two seizures per patient. We applied network analysis techniques, treating each electrode in the iEEG array as a node in a network. The overall flow of our algorithm is outlined in Figure 3. We computed the cross-spectrum matrix for each time window, then the corresponding EVC, and then trained a Gaussian weighting function that assigned to each electrode a probability of lying within the epileptogenic zone (EZ). After computing the heat map for the EZ-predicted set of electrodes, we compared it with the clinically annotated electrodes for both successful and failed surgical outcomes. We show results for each center separately, as well as for all patients grouped together. Note that we trained the Gaussian weighting function on only one patient center so that we could test our results across centers. Clinical procedures can vary more from center to center than within a center, so training on one center and then testing on all other centers is a conservative way to check whether our analysis holds across different clinical procedures. All Matlab (R2016b) and Python (v 2.7) code is publicly available online at Li (2018).

Computational steps for seizure onset localization: the algorithm processes raw ECoG to compute the sequence of adjacency matrices A(t). From this sequence, A(t), it computes the sequence of leading eigenvectors, v(t), as a network centrality measure, the EVC. The algorithm then converts the EVC into the sequence of rank centralities r(t). From this sequence, r(t), the algorithm computes a heat map that generates predictions of the EZ. Yellow shading indicates the EVC of the first electrode evolving over time, whose rank centrality, r1(t), is illustrated in the plot.

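A hedged sketch of the centrality steps in the caption above, assuming symmetric per-window adjacency matrices; the function names are illustrative and are not taken from the released Li (2018) code.

```python
# From a sequence of adjacency matrices A(t), compute the leading
# eigenvector v(t) (the EVC) per window, then convert to rank centrality r(t).
import numpy as np

def evc_sequence(adjacency_stack):
    """adjacency_stack: array of shape (T, N, N), symmetric in each window."""
    evcs = []
    for A in adjacency_stack:
        w, V = np.linalg.eigh(A)            # eigendecomposition (symmetric A)
        v = np.abs(V[:, np.argmax(w)])      # leading eigenvector, sign-fixed
        evcs.append(v / v.sum())            # normalized EVC for this window
    return np.array(evcs)                   # shape (T, N)

def rank_centrality(evcs):
    """Rank each electrode's EVC within every window (1 = most central)."""
    order = np.argsort(-evcs, axis=1)
    ranks = np.empty_like(order)
    T, N = evcs.shape
    for t in range(T):
        ranks[t, order[t]] = np.arange(1, N + 1)
    return ranks                            # shape (T, N)
```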

Data preprocessing

All data underwent digital filtering with a 4th-order Butterworth notch filter, implemented in Matlab with the filtfilt function (frequency range 59.5 to 60.5 Hz). In general, EEG data are known to be noisy, and referencing schemes can play a significant role in downstream data analysis. We chose to apply a common average reference scheme to the data before analysis (Ludwig et al., 2009). Here, we take the average signal across all recording electrodes and subtract it from each electrode. This has been shown to yield more stable results and to reject noise correlated across many electrodes (Gliske et al., 2016). We made sure to exclude from subsequent analysis all electrodes that clinicians reported as containing artifacts in their recordings.
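A minimal sketch of this preprocessing, assuming the data sit in a channels × samples array; it uses scipy's butter/filtfilt as an equivalent of the Matlab filtfilt call mentioned above.

```python
# 4th-order Butterworth band-stop (notch) filter around 60 Hz, applied with
# zero-phase filtering, followed by common average referencing.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs, band=(59.5, 60.5), order=4):
    """eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandstop")
    filtered = filtfilt(b, a, eeg, axis=1)                   # zero-phase notch
    car = filtered - filtered.mean(axis=0, keepdims=True)    # common average reference
    return car
```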

Computing and ranking nodal centrality over time

Normalizing the rank evolution signals

Computing feature vectors from the normalized rank signals

For each normalized signal, we extracted the deciles in time: the points where the signal integrates, in equal increments, to 10% of the total area under the curve, that is, points in normalized time where the signal integrates to 0.1, 0.2, 0.3, and so on until the end of the signal is reached. This gives a 10-dimensional vector for each signal, which serves as its feature vector.
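A sketch of the decile feature, assuming a non-negative normalized rank signal; the exact interpolation used in the original code may differ.

```python
# 10-dimensional feature vector: normalized times at which the cumulative
# area under the signal reaches 0.1, 0.2, ..., 1.0 of the total area.
import numpy as np

def decile_features(signal):
    """signal: 1-D non-negative array. Returns a length-10 feature vector."""
    s = np.asarray(signal, dtype=float)
    cdf = np.cumsum(s) / s.sum()            # cumulative fraction of the area
    t = np.linspace(0.0, 1.0, len(s))       # normalized time axis
    deciles = []
    for q in np.arange(0.1, 1.01, 0.1):
        idx = min(np.searchsorted(cdf, q), len(s) - 1)  # clamp for round-off
        deciles.append(t[idx])
    return np.array(deciles)
```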

Electrode weight assignment based on feature vectors

Once we had computed feature vectors for each signal, we projected the features into a 2D principal component (PC) space. This was done by treating each feature vector as an observation, so the analysis was carried out in space × time. We performed PCA and plotted the features of all electrodes and patients on the first and second PCs. Each electrode (a data point in Figure 4A) was labeled according to whether it lay in the clinically annotated EZ region and whether the surgical resection was a success or a failure. We then created a weighting function over the 2D PC space that assigns a weight to an electrode based on its location in PC space.
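A minimal sketch of the projection step using scikit-learn's PCA, assuming the 10-dimensional decile features are stacked one row per electrode; the original Matlab implementation may differ in centering or scaling conventions.

```python
# Project per-electrode decile features onto the first two principal components.
import numpy as np
from sklearn.decomposition import PCA

def project_features(feature_vectors):
    """feature_vectors: array of shape (n_electrodes, 10).

    Returns an (n_electrodes, 2) array of first/second PC coordinates.
    """
    pca = PCA(n_components=2)
    return pca.fit_transform(np.asarray(feature_vectors, dtype=float))
```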

(A) First and second PCA component distribution. Points in PC space: 1. Green +: resected electrodes in successful outcomes, 2. Red •: non-resected electrodes in successful outcomes, 3. Black +: resected electrodes in failed outcomes, and 4. Black •: non-resected electrodes in failed outcomes. The plots in each of the four insets show the mean normalized rank centrality signal for points in the regions bounded by orange rectangles. The shaded regions in the plots indicate the 1 standard deviation bounds. The green and red lines in the plots indicate the start and end of a seizure, respectively. The yellow circle highlights the region of interest, where there are many green markers. (B) An example of the Gaussian weighting function, where the color represents the weight assigned to an electrode for lying within the EZ. The four plots on the left show the Gaussian weighting function for each quadrant, respectively. The plot on the right is the sum of the four Gaussian functions, giving the final Gaussian weighting function.


To generate this weighting function, we discretized the PC space into equally sized square partitions (100 × 100 along the first and second principal components). The mean normalized rank signature over all data points was computed for each partition. The signatures for the four corner partitions are shown in Figure 4A. The shapes of the mean normalized rank signatures change across partitions in a somewhat continuous fashion. Moving vertically from the bottom of the PC space to the top, the rank signatures transition from a concave to a convex shape. Moving from left to right, the signature shifts horizontally: forward (to the right) when the partition is at the bottom of the PC space, and backward (to the left) when it is at the top of the PC space.

We hypothesized that the arc signature displayed at the bottom left of Figure 4A represents the signatures of the EZ, because this is the region of PC space containing the most isolated channels coming from patients with successful outcomes (green + points). Indeed, the bottom portion of the PC grid shows the arc signature. The weighting function is therefore highest in these regions and decays as a function of distance from them. We defined the weighting function as the sum of four bivariate Gaussian-like functions (Equation 5, Figure 4B). The 2D PC space is divided into four quadrants defined by an origin; see Figure 4B (left) with origin (−100, −100).

Training the origin of the Gaussian weighting function

w_i(x, y) = exp( −α_i (x − μ_i)^T Σ_i^{−1} (x − μ_i) ),   where x = (x, y)

α_i: exponential decay factor for the i-th quadrant

x and μ_i: the position vector and the mean vector of the i-th quadrant, respectively

Σ_i: covariance matrix of the i-th quadrant
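A hedged sketch of this weighting function (Equation 5) with placeholder means, covariances, and decay factors; the trained values from the study are not reproduced here.

```python
# Weight of a point in PC space: sum of four quadrant-specific Gaussian-like terms.
import numpy as np

def quadrant_weight(pos, mu, cov, alpha):
    """w_i(x, y) = exp(-alpha_i * (x - mu_i)^T Sigma_i^{-1} (x - mu_i))."""
    d = np.asarray(pos, dtype=float) - np.asarray(mu, dtype=float)
    return np.exp(-alpha * d @ np.linalg.inv(cov) @ d)

def total_weight(pos, quadrant_params):
    """quadrant_params: list of (mu, cov, alpha), one per quadrant."""
    return sum(quadrant_weight(pos, mu, cov, alpha) for mu, cov, alpha in quadrant_params)

# Placeholder parameters for four quadrants around an origin at (-100, -100),
# as in Figure 4B; these values are illustrative only.
params = [((-150, -150), np.eye(2) * 400, 0.01),
          ((-150,  -50), np.eye(2) * 400, 0.01),
          (( -50, -150), np.eye(2) * 400, 0.01),
          (( -50,  -50), np.eye(2) * 400, 0.01)]
print(total_weight((-120, -120), params))
```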

Computing the degree of agreement (DOA) and statistical analysis

Across all patients, electrodes, and seizure events, we obtain a collection of DOA values. We then form two distributions: (a) the distribution of DOA for all electrodes implanted in patients with successful treatments, and (b) the distribution of DOA for all electrodes implanted in patients with failed treatments. We then test whether there is a significant difference in the DOA distributions between these two patient groups, using the Wilcoxon rank-sum test. This non-parametric test was chosen because the data are not guaranteed to satisfy the normality assumptions of a Student's t-test (Whitley & Ball, 2002). In addition, we performed a pooled across-center analysis, in which we combine all the data and test whether the DOA distributions for successful versus failed outcomes differ significantly.

In addition to this analysis, we also apply min-max scaling to normalize the degree of agreement within each center, so that successful and failed outcomes can be compared on the same scale.
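A minimal sketch of both steps, assuming the DOA values are grouped by center and by surgical outcome; it uses scipy's Wilcoxon rank-sum implementation.

```python
# Per-center min-max scaling of DOA values and a rank-sum comparison of
# successful vs. failed outcomes.
import numpy as np
from scipy.stats import ranksums

def minmax_per_center(doa_by_center):
    """doa_by_center: dict mapping center -> 1-D array. Scale each center to [0, 1]."""
    scaled = {}
    for center, doa in doa_by_center.items():
        doa = np.asarray(doa, dtype=float)
        rng = doa.max() - doa.min()
        scaled[center] = (doa - doa.min()) / rng if rng > 0 else np.zeros_like(doa)
    return scaled

def compare_outcomes(doa_success, doa_failure):
    """Wilcoxon rank-sum test between the two DOA distributions."""
    return ranksums(doa_success, doa_failure)   # returns (statistic, p-value)
```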

High-frequency oscillations: the qHFO detector

We compared our algorithm with the qHFO algorithm described in Gliske et al. (2016), which uses a sensitive HFO detector and then rejects HFOs produced by artifacts. Previous work has shown that sampling rates of 1000 Hz are able to capture HFOs, but only about 60% of the events reported in Gliske et al. (2016). We therefore analyzed only patients with sampling rates ≥ 1,000 Hz and available interictal data. This resulted in three patients from NIH and two patients from JHU, with a total of 13 separately recorded datasets. The datasets analyzed here had an average recording length of 7.1 min, 83 total electrodes analyzed, and 10 electrodes within the clinically annotated EZ set. A few minor adjustments were made to apply the qHFO algorithm to these data.

We used a single common average reference applied to all analyzed intracranial electrodes (as described earlier), rather than separating the referencing between depth electrode channels and grid channels, as in Gliske et al. (2016). The popDet artifact rejection method also could not be used, as it requires sampling rates of at least 2,000 Hz.


What is the role of computation in neuroscience?

William Newsome of the Wu Tsai Neurosciences Institute discusses motivation, consciousness, and the fascinating challenges facing computational neuroscientists in this Director's Conversation.

Wu Tsai Neurosciences Institute Director William Newsome explores the intersection of AI and neuroscience with HAI Denning Co-Director Fei-Fei Li.

In this Director's Conversation, HAI Denning Co-Director Fei-Fei Li's guest is William Newsome, the Harman Family Provostial Professor of Neurobiology at the Stanford University School of Medicine and the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute.

Here, Li and Newsome discuss the role of computation in neuroscience, the challenges computational neuroscientists can tackle, whether understanding the brain at the molecular level can lead to better neural networks, AI's motivation spectrum, and the thorny definition of consciousness when it comes to both natural and artificial intelligence.

Fei-Fei Li: Welcome to HAI's Directors' Conversations, where we discuss advances in AI with leaders in the field and around the world. With me today is Professor Bill Newsome. I am very, very excited to have this conversation with Bill. We have had conversations throughout our careers, and I have been a great admirer of Bill's scholarship and leadership. He is a professor of neurobiology at the Stanford School of Medicine and the director of the Wu Tsai Neurosciences Institute at Stanford. Bill has made significant contributions to understanding the neural mechanisms underlying visual perception and simple forms of decision making. As head of the Wu Tsai Institute, he focuses on multidisciplinary research that helps us understand the brain, offer new treatments for brain disorders, and promote brain health.

So welcome, Bill. I am really looking forward to this conversation.

Bill Newsome: Delighted to be here, Fei-Fei. Good to see you again.

Li: Let's start with definitions and talk about the intersection of AI and neuroscience. What do you see as the role of computation in your field and in the work of the Wu Tsai Institute?

Newsome: Well, that's a good question. Computation is extremely important in neuroscience today. There are two or three different ways I could answer that question, but let me try this one on you. We actually have a subfield called computational neuroscience. We have hired faculty in this area here at Stanford, and we hope to hire more. People sometimes ask me, "What is that?" And I would put it this way. Computation in neuroscience has roughly three different areas that are really important. The first area is the theorists: these are people who actually try to theorize and extract general principles about how the brain computes, how the brain represents, and how the brain produces action.

The second area is what I would call the neural network kind of people: modelers, people who really understand how to build deep convolutional networks and recurrent neural networks, and who actually model simple toy problems that we know the nervous system solves. If we can figure out how these networks solve these problems, we can gain insight into new hypotheses about the nervous system.

And then a third way in which computation is really influencing neuroscience is through high-end data analysis. As in many areas of science, we are now getting bigger, richer, more sophisticated, and sometimes very obscure datasets out of the brain than we have ever been able to get in human history. And really understanding how to handle these data, how to treat them, and how to avoid statistical pitfalls is extremely important.

I think all three of these areas of computation are very important for neuroscience. I think individual computational neuroscientists can excel in two, or sometimes even all three, but we need more people like this in neuroscience today, not fewer, because the challenges are bigger than they have ever been.

Li: What do you think are the biggest challenges that call for this kind of computational neuroscientist?

Newsome: Well, let me give you a couple of real-world examples, maybe one where computation has actually played a leading role while other approaches in neuroscience lagged behind, and one where computation needs to step in and play a role in order to create understanding.

One area I would give you is something in the nervous system called integration, which is familiar to anyone who has taken calculus. It literally adds up events that happen, literally integrates some time series and tells you how much you have at the end. It turns out to be a very important problem in the nervous system in many areas, including decision making, but also in something very simple: just moving your eyes. We know that when you move your eyes from this point to that point, certain neurons give a little burst in a map of eye-movement space in the brain, and when that burst occurs, the eyes go there. The wonderful thing is that the eyes stay there after they arrive, even though the burst is gone.

And the theorizing, the theory of computation, is about integration: how can you take neural signals from the burst and integrate them to a value that holds the eyes in position until the animal is ready to move the eyes again? We know some things about the physiology, we know some things about the computation, and we know some things about the connectivity of the brain structures that generate eye movements. What we were really missing, in this case, was actually the anatomy.

It turned out that several computational theories incorporating physical principles could explain it, but to know which one is actually at work in the brain, we needed the microanatomy of how the cells are actually connected to one another. That is an example where computational theory actually led the way and motivated some anatomical questions.

But one that many listeners today will resonate with is the example of deep convolutional networks approaching human performance in visual categorization.

Li: Surpassing it, in some cases.

Newsome: Absolutely surpassing it in some cases. Human jobs are insecure here because these networks are getting so good at visual categorization. And this presents a really interesting problem, because you’re going to train a deep convolutional network that can do these things that seem almost magical, and we know almost everything there is to know about them, right? We know exactly the connections between the layers of the trained network, we know the signals that are passing, we can see the dynamics that exist, we can measure their performance. But there is still this deep angst, an intellectual angst, I’d say, among neuroscientists, and I think among some people in the AI community, that we still don’t understand how that’s happened.

What are the algorithmic principles by which you take an array of pixels and you turn it into faces, and distinguish among faces? And somehow, the deep physical and computational principles are not there yet. We don’t understand how these things are working. We understand the learning algorithm, and maybe that’s as deep as it’ll get at some point. But here’s a situation where I think computation needs to step in and teach us this, both for the artificial networks, and then for the real networks in our brains that recognize faces.

Li: Bill, I want to elaborate on that because neural networks, especially in visual recognition, are dear and near to my heart. On the one hand it’s phenomenal, right? We have these hundreds-layered, sometimes even thousand-layered convolutional neural network or recurrent network algorithms that are just very complex and can perform phenomenally well. When it comes to object recognition, some of these networks do surpass human capability. But in the meantime, if you look inside under the hood of these algorithms, while they’re humongous, they’re also extremely contrived compared to the brain.

I’ll just take one example of the neuron-to-neuron communication. The way that it’s realized in today’s neural network algorithm is a single scalar value, whereas the synaptic communication in the brain, as we learn more, and your colleagues will tell us, is far more complex. The neural signaling is not just one kind of neural signaling. I would love to hear more about this.

Also, on a little more system level, our brain is this organic organ that has evolved for at least the past 500 million years (the mammalian brain is about 100 million years old), and it has different parts, and different modules, and all that. And today’s neural networks are nowhere near that kind of complexity and architecture.

So on one hand these humongous deep learning models are doing phenomenally well. On the other hand, they’re also very contrived compared to the brain. And I’m just very intrigued: from your perspective, as we learn more about these computational realities of the brain at the molecular level, at the synaptic level and the system level, do you see that we’re going to have different insights how to build these neural networks?

Newsome: I hope so, Fei-Fei. I think this is one of the deepest intellectual questions that computationally-minded neuroscientists argue about, and that’s to what extent are AI, and what I call NI — natural intelligence — going to converge at some point and really be useful dialogue partners? And to what extent are they simply going to be ships passing in the night, or they’re going to be parallel universes? Because there are these dramatic differences, as you point out.

One individual neuron ­— and our brain contains about 100 billion of them — is incredibly complex: incredibly complex shapes and incredibly complex biophysics, and different types of neurons in our brain have different types of physics. They’re profoundly non-linear, and they are hooked together in these synapses and ways that form circuits, and understanding and mapping those circuits is a big fundamental problem in neuroscience.

But something that should give all of us great pause is that there are these substances that are released locally in the brain called neuromodulator substances, and they actually diffuse to thousands of synapses in the space around them in the brain, and they can completely change that circuitry. This is beautiful, beautiful work by Eve Marder, who spent her career studying this neuromodulation. You take one group of neurons that are hooked up in a particular way, spritz on this neuromodulator, and suddenly they’re a different circuit, literally.

Li: Yeah, that’s fascinating. We don’t have that computational mechanism at all in our deep learning architecture.

Newsome: And another feature of brain architecture, that you and I have talked about offline together, is that brain architecture is almost universally recurrent. So area A of the brain has a projection to area B. You can kind of imagine that as one layer in the deep convolutional network to another layer. But inevitably, B projects back to A. And you can’t understand the activity of either area without understanding both, and the non-linear actions, the dynamical interactions that occur to produce a state that involves multiple layers simultaneously.

Many of us think today that understanding those dynamical states that are distributed across networks are going to be the secret to understanding a lot of brain computation.

I know that recurrence is starting to be built into some of these DCNs now. I don’t know where exactly that field sits, but that certainly is one of the ways you get dynamics.

Dynamics are, again, another universal feature of brain operation. They reflect the dynamics in the world around them, and the input but also the dynamics in the output. You’ve got to have dynamical output in order to drive muscles to move arms from one place to the other, right? So the brain is much richer, in terms of dynamics.

Another thing about the brain is it operates on impressively low power.

Li: I know, I was going to say the 20-watt problem. That’s dimmer than any lightbulbs we have. We hear about these impressive neural networks like GPT-3, or neural architecture search, or BERT, and image algorithms that are all burning GPUs much more massively.

So how do you think about that?

Newsome: Well, I don’t think about it very much, except that our contrived devices are very, very inefficient and very wasteful.

We have a colleague at Stanford, Kwabena Boahen, who studies neuromorphic engineering, and trying to build analog circuits that compute in a much more brain-like way. And his analog circuits certainly are much, much more efficient in power usage than digital computers. But they haven’t achieved nearly the level of impressive performance and the kinds of sort of cognitive-like tasks that DCNs have achieved so far. So there’s a gap here that needs to be crossed.

Li: Yeah, I think this is a very interesting area of research. You mentioned the word cognitive, and I want to elaborate on that because I know we started talking about computational neuroscience, but cognitive neuroscience is part of neuroscience, and also in the field of vision where I sit.

First of all, half of my PhD was cognitive neuroscience. Second of all, in the past 30 years, I would give cognitive neuroscience a lot of credit in the field of vision for showing the AI world what problems to work on, especially the phenomenal work coming from the 70s and 80s in psychophysics by people like Irv Biederman, Molly Potter, and then getting to neurophysiology and cognitive neurophysiology, like Nancy Kanwisher, Simon Thorpe, showing us the phenomenal problem of object recognition, which eventually led to the blossoming of computer vision object recognition research in the late 90s and the first 10 years of the 21st century.

So I want to hear from you, do you still see a role of cognitive neuroscience in, I guess, two sides of this: one is in today’s AI, which I think I have an opinion, but also AI coming back to help?

Newsome: I am not nearly as well versed or trained in cognitive neuroscience as you were. That was your graduate training. I think in a very simple-minded way about cognitive neuroscience, that may make our colleagues, may make you shudder, Fei-Fei, I’m not sure. I was trained as a sensory neuroscientist, trained in the visual system, the fundamentals of Hubel and Wiesel, and the receptive-field properties in the retina. And then the first processing in the brain, and then the cortex.

I was sort of getting into the brain, back in the 1970s and 1980s, thinking about signals coming from the periphery. We all called ourselves sensory neuroscientists, but there was another whole group of neuroscientists who were coming the opposite direction. They were having animals make movements: a right eye movement, like we’ve already talked about, or arm movements, and they’re looking at the neurons that provide input to those movements, and then they’re tracing their inputs back into the brain. And this was a motor science kind of effort.

And the sensory side and the motor side has enjoyed listening to each other talk, but they didn’t really talk about it very much. But they had to meet eventually. And I think one part of my career was playing a part in hooking those two things up. And we did it by studying simple forms of decision making. So giving animals sensory stimuli — that was my comfort zone — asking animals to make a decision about what they were seeing, and then make an operant movement. And if they got it correct, they got a reward.

Well, how did the sensory signals that are the result of a decision get hooked up to steering the movement? And that there, you’re squarely in cognition land. Some people refer to that as the watershed between sensory systems in the brain and motor systems in the brain. How do you render decisions?

You can think about sensory representations in the brain as kind of being like evidence, providing evidence about what’s out there in the world. But then you can think about these cognitive structures in the brain that have to actually make a decision, render a decision, and instruct movements. You can’t move your eyes to the right and to the left at the same time. Not going to happen. Sometimes you simply have to make decisions.

That’s how I kind of got into the cognitive neuroscience. And I think it’s one of the most interesting fields in all of neuroscience right now. I am hoping that AI and computational theory ... well, I know that computational theory is making contributions, because some of the integration problems, integration of evidence from noisy stimuli, those kinds of theories, those kinds of theoretical models have deeply informed my own work in decision making. So computational theory is certainly making contributions.

I sometimes wonder about the other way around: What is that we are learning from vision and neuroscience that could inform AI? And you and I have had conversations about that as well.

Li: Right. So I’ll give you an example of a group of us, Stanford neuroscience people like Dan Yamins, Nick Haber, they are the young generation of researchers who are actually taking developmental cognitive inspiration into the computational modeling of deep learning framework. They are building these learning agents that you can think of as learning babies as a metaphor, where the AI agent is trying to follow the rules of the cognitive development of early humans, in terms of curiosity, exploration and so on, and learn to build a model of the world and also improve its own dynamic model of how to interact with this world.

I think the arrow coming from cognitive developmental science actually is coming to AI to inspire new computational algorithm that transcends the more traditional, say, supervised deep learning models.

Newsome: One example where neuroscience has really led the way for artificial intelligence and for convolutional networks and artificial vision is the deep understanding of the early steps of vision in the mammalian brain, where receptive-field structures filtering for spatial and temporal frequencies have particular locations in space, the multiscale nature of that, and the assembling of those units in ways that extract oriented Gabor filters. That’s typical of the oriented filters in the early stages of cortical processing in all mammals. And that now is baked into artificial vision.

That was the first thing. You don’t even bother to train a DCN on those steps. You just start with that front end, and that front end came honestly from neuroscience, from the classic work of Hubel and Wiesel, as you know, coming through some principled psychophysics and statistical analysis input from people like David Field. I think if I had to point to one thing that neuroscience has given to AI, it would be the front end of a lot of artificial vision.

Li: That’s a really big thing, so absolutely.

Newsome: Fei-Fei, let me just say that the other challenge there — and I think many of the young generation who are working on vision would acknowledge, I think everyone acknowledges this, really — is that for the artificial visual systems, even though they can surpass human performance in some cases after they’re trained, the learning process is so different for humans than for the artificial systems.

The artificial systems need tens of thousands of examples to get really, really good, and they have to be labeled examples, and they have to be labeled by human beings, or whatever your gold standard is. Whereas I have this little 5-year-old daughter at home, and by the time she was two or three, she had looked at a dozen examples of elephants, and she could recognize elephants anywhere. She could recognize line drawings, photographs, different angles, different sizes, different environments. And she can play Where’s Waldo in the common children’s magazine. And this is profoundly different.

So here’s an example where human cognitive neuroscience and the study of visual development in young humans and young animals, I think, presents a real challenge for artificial vision, artificial intelligence.

Li: Yeah, I actually wanted to emphasize a point you just made because it truly, to use your word, is profound: the way humans, your NI, natural intelligence systems, learn biologically is so different. I still remember 20 years ago, my first paper in AI was called “One-Shot Learning of Object Categories,” but until today, we do not have a truly effective framework to do one-shot learning the way that humans can do, or few-shot learning. And beyond just training example-based learning, there is unsupervised learning, there is the flexibility and the capability to generalize, and this is really quite a frontier of just the overall field of intelligence, whether it’s human intelligence, or artificial intelligence.

Newsome: Yeah, I think both AI and NI have to be appropriately humble right now about this. We’re almost equally ignorant about exactly how that happens.

Li: In a way, I almost think it has a social impact for those of us who are scientists. We need to share with the public about the limitations because the hype talk of AI today, of machine overlord and all that, is built upon some of the lack of knowledge of the limitations of the AI system, and also the phenomenal capability of human intelligence to stumble.

Bill, I want to switch topic a little bit because I think what you are doing at Wu Tsai goes beyond some of this lower-level modeling. One of the most important charter missions of Wu Tsai is neuro disorders and healthcare. Here, I’m going to say something that I hope you might even disagree with: should we view AI and machine learning more like a tool for our researchers and doctors and clinicians, a modern tool of data-driven methodology to help discover mechanisms of diseases and treatments? Are there any examples of work at Wu Tsai like that? Just in general, how do you view AI through that lens of studying neuro disorders?

Newsome: Yeah, that’s a really good question. Is AI really more of a tool to enable us to get on with the business of doing serious biology, or do the actual processes and algorithms and architectural structure of AI lend understanding to their correspondence inside the brain?

And I think the answer is both. So let me just give you a little Bill’s-eye-view of neuro disease. There are some neurological diseases that have psychiatric comorbidities, where the biggest problem is simply that cells in the nervous system, somewhere in the nervous system, start dying, for reasons we don’t know yet. Parkinson’s disease is an example, where it’s a particular class of cells, the dopaminergic cells, that start dying, and we don’t know why. And in Alzheimer’s disease, cells start dying all over the brain. There are some areas that are particularly sensitive, but by the time an Alzheimer’s patient shows up in the clinic complaining of symptoms, they’ve already lost probably billions of nerve cells, certainly hundreds of millions of nerve cells, by the time they become symptomatic.

And those diseases, I think, are going to be solved ultimately at a molecular and cellular level. Something is going wrong in the life of cells, and whether that’s in the metabolic regime, whether it’s in the cleanup regime, keeping the cell whole and safe and free of pollutants, whatever it is, the secrets to that are going to be in cell biology, and AI can certainly help us tremendously just by providing tools to assemble all the data that we’re acquiring at that genetic and molecular level inside of cells.

On the other hand, there are neural diseases that smack of a more systems type of pathology; the problems are probably not lying in single cells. So you take some of the symptoms of Parkinson’s disease, for example, the tremor and things like this, they can actually be rectified by putting stimulating electrodes inside the brain and doing a process called deep brain stimulation. Any of the listeners who aren’t familiar with this can just Google “deep brain stimulation,” or go to YouTube, and you can see amazing videos of remission of symptoms with this. Not a cure for Parkinson’s, but it’s a treatment for the symptoms.

And there are things like depression, which themselves don’t kill people; it’s not like a progressive degenerative disease. In depression, people come in, they come out. It’s a dynamic kind of process. It smacks of a state system inside the brain, and that state can go through multiple states, some of which are depressed, some of which we would characterize as a more normal or positive kind of outlook.

And that kind of dynamics of complex systems, I think, is going to be part and parcel of the AI computational neuroscience thrust: understanding how these densely interconnected networks, based on certain inputs, can assume different states, fluctuate between them. I think could give some insight into the actual disease itself.

So I think it depends on which disease you’re talking about, on whether AI is primarily going to be a tool or whether it might actually suggest some intellectual insights into the sources and explanations for some of them.

Li: That speaks of the broadness of machine learning AI’s utility in this big area. Where we sit at HAI, we see already a lot of budding collaborations between the school of Medicine, Wu Tsai Institute, and HAI researchers, where all of these topics are touched. I know there’s reinforcement learning algorithms in neurostimulation for trauma patients. Or there is computer vision algorithms to help neuro-recovery, in terms of physical rehabilitation. And also all the way down to the drug discovery, or those areas. So very excited to see this is also a budding area of collaboration between AI and neuroscience.

Newsome: I think that this is going to grow, that interface is going to grow. I think ultimately we’ll diagnose depression much better through rapid real-time analysis of language that people use, and adjectives that they use, than expensive interaction with physicians. I don’t think the algorithms are going to replace physicians, but they’ll be very useful.

Can I bug you about something I’m wondering about?

Newsome: In some of the first discussions that led up to the formation of HAI, which I was privileged to sit in on, we raised a question: when you’re a biologist, you think about a human or animal performing a task, doing a discrimination task, or making a choice between this action and that action. And the question that comes up is motivation. What is the organism motivated to do at the time?

And this gets very complicated. In all kinds of social situations with humans, we worry about what’s fair, and we may do things that are against our economic interest because we are striking out for fairness. There are these values, there are these motivations, there are these incentives. And I wonder: what is the motivation in an artificial agent? To the extent that I know anything about motivation in artificial agents, it’s minimizing some cost function. Is that all there is to understand about incentives and motivation?

Is that all these complex feelings that we have, are they just reduced to cost functions, or is there a whole world there that AI needs to discover that they haven’t even scratched the surface of yet?

Li: This is a beautiful question. When you talk about motivation, I was thinking: what kind of reward objective mathematical functions can I write? And I come up with very simple ones, like in the game of Go, I maximize the area that my own color blocks occupy. Or in the self-driving car, I will have a bunch of quantifiable objectives, that is: stay in the lane, don’t hit an obstacle, go with the speed, and so on. So yes, the short answer is motivation is a loaded word for humans, but when it comes to today’s AI algorithms, they are reduced to mathematical reward functions, sometimes as simple as a number, or what we call scalar functions. Or a little more complex, a bunch of numbers, and so on. And that’s the extent.

This clearly creates an issue with communication with the public because on one hand, people are claiming incredible performances in vision and language, especially those confusing language applications where you feel the agent actually is talking to you, but under the hood it’s just an agent optimizing for similar patterns it has seen.

So we don’t have a deep answer to this at all. My question to flip this is: both as a neuroscientist, as well as more objective observer of AI, do you see this as a fundamentally insurmountable gap, hiatus, between artificial intelligence and natural intelligence that would potentially touch on philosophical issues like awareness, consciousness? Or do you think this is a continuum of computation? At some point when computation is more and more sophisticated, things like motivation, awareness, or even consciousness would emerge?

Newsome: Well, let me answer that in a couple of ways. First, I don’t think that it’s a fundamental divide. I don’t think that there’s anything magical inside our brains associated with the molecules carbon and oxygen, hydrogen and nitrogen. I do this thought experiment sometimes with groups where I say, “I’ve got 100 billion neurons in my brain, but imagine I could pull one of them out and replace it with a little silicon, or name your substance, neuron that mimicked all the actions of that natural neuron perfectly. It received its inputs, it gave its outputs to the downstream neurons, it can even modulate those connections with some neuromodulator substance, it can sense some: would that still be Bill with this one artificial neuron inside my brain along with 100 billion natural ones?” And I think the answer would be yes. I don’t think there’d be anything fundamentally different about my consciousness and about my feeling. And then you just say, “Well, what if it’s two?”

Newsome: What if it’s three? And you get up to 100 billion, after a while, and my deep feeling is that if those functional interactions are well mimicked by some artificial substance, that we will have a conscious entity there. I think it may well be that entity needs to be hooked up to the outside world through a body because so much of our learning and our feeling comes through experience. So I think robotics is a big part of the answer here. I don’t like the idea of disembodied conscious brains inside a silicon computer somewhere. I’m deeply skeptical of that.

Li: It’s like the movie, Her.

Newsome: Yeah. I don’t think the divide’s fundamental, I don’t think it’s magic. But the only place I know that consciousness exists, and that these intense feelings take place, is in the brains of humans, certainly in a lot of other mammals, maybe all other animals, but probably in birds and others as well.

But I think that between neuroscience as it's constituted now and artificial intelligence as it's constituted now, there may be a fundamental divide, just because they start with different presumptions, different goals of the kind that we've discussed here for the last half hour.

Does that make any sense to you, or am I just babbling here?

Li: No, no, it's making some sense to me, but let me try to share my points of agreement and disagreement. My point of agreement is that, like you said, where we are, the deep learning algorithms and also our understanding of the brain are still so rudimentary. And from the AI point of view, the gap between what today's AI, or foreseeable AI, can do and what natural intelligence does, from computation to emotion to consciousness, is just so wide that I really don't see how the current architectures and mathematical guiding principles can get us there. What I don't have an answer to is when you say 100% of your neurons are replaced.

Newsome: But perfectly mimicking the functional relationships of the originals.

Li: First of all, I don't know what perfectly mimicking means there, because we're in a counterfactual scenario. Maybe we can perfectly mimic you up to this point of your life where your neurons are replaced, but what about all the future? Is that really Bill? It's almost a philosophical question that I don't know how to answer. But I think this consciousness question is at the core of some neuroscience researchers' pursuits, as well as a very intriguing question for AI as a field.

Newsome: So consciousness, I call it the C word, and mostly I don’t utter the C word. But it is maybe the single most real, as Descartes thought, and interesting feature of our internal mental lives, so it’s certainly worth thinking about, both from a neuroscience point of view and an artificial intelligence point of view.

A lot of the muddiness about that word comes because we use it to mean so many different things. We use it to mean a pathological state, somebody’s unconscious rather than conscious. We use it to mean a natural state called sleep, and people are asleep and not conscious. Or we use it to mean: I’m conscious of this TV screen in front of me and I’m not conscious of the shoes on my feet at this particular point in time. Or we can use it at a much higher level, that I am conscious of the fact that I’m going to exist, that I’m going to die, that I have a limited time on this planet and I need to find as much meaning from those years as possible.

And so you have to sort of home in on what you're really trying to understand with the word. I think the one that's most common is simply what we're conscious of at any moment, what we're aware of-

Newsome: A phenomenal awareness, as the philosophers call it. Many of your listeners will be familiar with David Chalmers and his notion of the hard problem of consciousness and the easy problem of consciousness. If you're not, it's definitely worth getting familiar with them. Chalmers says that there are some things that neuroscientists are going to solve. We're going to solve the easy problems. We're going to solve attention, we're going to solve memory, we're going to solve visual perception, we're going to solve visual coordination: all these features of conscious beings we're going to solve, because we can see in principle the outlines of an answer to them, even though we're far from having any details.

But what he says is the hard problem is why should some biological machinery hooked together in a particular way, why should there be any internal feelings at all that go along with that, that we’re conscious of? Conscious of being happier, conscious of being sadder, conscious of seeing red, or conscious of seeing green. Why is there that phenomenal experience?

And one of the things I've learned as a neurobiologist is that I can ask questions up to a certain point in animals. I can electrically stimulate different parts of the brain, and I can elicit very sophisticated kinds of responses and behavioral responses, and yet I do not know what that animal is actually feeling at that moment. There's this first-person experience of our being, and presumably of other animals' being, and it is very difficult to know how we would describe that in any kind of objective terms, any kind of math that you could-

Newsome: Qualia, exactly. And that's the hard problem. I'll tell you, most neuroscientists, the large majority of neuroscientists, would deny that there is a hard problem of consciousness. It's almost an ideology, honestly, because neuroscientists believe in the supremacy of their field. It's a very deep commitment that once we get a mature neuroscience 500 years from now, however long it takes, there will be nothing about the brain or the mind left to explain.

And people who take the hard problem of consciousness seriously say, "It may be that an intrinsically third-person science cannot account for what is intrinsically first-person experience." There just may be a category mismatch. And so I give credibility to that, but I'm an unusual neuroscientist in giving credibility to that.

Li: Yeah, you're a very open-minded neuroscientist. I remember, as a physics student at Princeton, some physicists said that humanity is incapable of understanding the universe to its deepest depth because we are part of it, and it's hard, from within something, to study the totality of that thing.

But just to be a little more concrete on the consciousness note for AI, one of the narrower definitions of consciousness is awareness, not even this deep awareness, but contextual awareness. And one of my favorite quotes about AI comes from the 70s, and it goes like this (keep in mind, this is the 70s): "The definition of today's AI is a computer that can make a perfect chess move without realizing the room is on fire." Of course, we can change the word chess move to a different game, like Go or anything else, but today's AI algorithms, not to mention having any deeper level of awareness, do not even have that contextual awareness. And this is five decades later, so we have a long way to go.

As scientists, especially someone like you with a long career in neuroscience and computational neuroscience, and someone like me who is trained both in AI and neuroscience, inevitably I think about the question: is there a higher science, or a unified framework, for thinking about intelligence? We talk about the current gap between NI — natural intelligence — and AI. We talk about the potential continuum along which we can close this gap. But maybe at the end of that is a unified science, the Newton's laws or the general relativity of intelligence. Do you foresee that? Do you have any conjecture about that?

Newsome: I don’t think I’m smart enough to see that far into the future, Fei-Fei. I think that’s a laudable thing.

I was a physics undergrad, so it may be the physics in me, but it was part of my formative experience, even though I never got terribly sophisticated. And I like unification, I like coherence, I like the idea that physical reality is one thing fundamentally, and that we ought to be able to understand it through successful approaches to physical reality.

I do have this question about consciousness out there that I don’t know if our third-person science will be able to deal with, but in terms of intelligence, of general intelligence, flexible intelligence, contextual sensitivity, I think we should be able to get there. I think Chalmers would say, “That’s part of the easy problem of brain function or consciousness.” And I think we should be able to get to some general principles.

I think we might not get to them until we take the AI systems and put them on the robots that actually have to make their way in this world in order to survive. Only then will the robot care whether the room is on fire.

Li: Yeah, actually I really want to accentuate your point, because you have said multiple times that you believe in a physical body, and I totally agree with you. You evoked Descartes' "I think, therefore I am," which removed the physical body. But just looking at evolution, as well as today's work in AI, robotics and machine learning, I also think that the embodiment of agency is critical, at least in the development of human intelligence, and it will become more and more relevant and critical in artificial intelligence. So I agree with you.

But I’m still hoping the Wu Tsai Institute or HAI, hopefully together, will host a new generation of scientists that one day will give us the universal law of intelligence and unify some of these questions in intelligence, don’t you think?

Newsome: Yeah, Fei-Fei, it's interesting. I'm not quite that ambitious yet. Just thinking about fundamental principles of computation in the brain, and information processing, and information extraction, and organization, and decision making, and memory, and learning, I don't even know that there is going to be a single set of principles for the brain. I think that different brain structures have very, very different architectures. The cortex is very different from the basal ganglia. The basal ganglia are very different from the spinal cord. The spinal cord is very different from the hippocampus. And all of those are very different from the cerebellum. There are certain things that are probably universal across the brain: they all work with action potentials, they all work with neurotransmitters, they're all subject to neuromodulation.

But the rules of computation and algorithms that are instantiated in those neurons of very different structures may simply be different. It may be that there’s going to be one theory and one set of principles for the cerebellum, and a different one for the cerebral cortex. So I don’t know that there’ll be any general unified theory of mammalian brain function. It may be a collection of allied theories. And then you’ve got to have theories about how those circuits interact with each other, or produce behavior.

But I think there’s so much headroom for progress, so much new data being acquired and I’m optimistic. I think this is a great time to be a neuroscientist. It’s obviously a great time to be an artificial intelligence person, and I hope that these two great human enterprises do converge in a meaningful sense at some point.

Li: I think that is such a great note to end on, and just to echo your call for action, especially to the students out there, that it’s a phenomenal time to be considering these two fields, and especially the intersection of the two fields. I think some great discoveries and innovation will come about because of this intertwined joint adventure between neuroscience and AI.

So, thank you so much, Bill. We've often said that you and I could sit here and just talk for hours and hours, and every time I talk to you it's just so inspiring, and I'm so humbled to be your colleague and to be working on this together.

And thank you to our audience for joining us. You can always go to the HAI website to visit us, or go to our YouTube channel to listen to other great discussions with leading AI researchers and thought leaders around the world. Thank you.

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. Learn more.


Study identifies druggable brain gene network implicated in epilepsy

'Network-biology' reveals how multiple interconnected genes in the brain can be simultaneously targeted for antiepileptic treatment. Credit: Duke-NUS Medical School

A new study has identified a network of genes in the brain that, when disrupted, causes epilepsy; the results also predicted that a known anti-epileptic drug works by restoring the network's function. This discovery not only offers a new target for developing anti-epileptic drugs, but the 'network-biology' approach developed by the team may also provide a quicker and cheaper way to accelerate the discovery of novel drug candidates that are effective treatments for disease.

The study was published on 13 December 2016 in the journal Genome Biology and was led by Associate Professor Enrico Petretto from the Centre for Computational Biology at Duke-NUS Medical School (Duke-NUS) in Singapore, in collaboration with Imperial College London.

Epilepsy is a common and serious neurological disorder which afflicts many people, and is characterised by a tendency to have recurring, unprovoked seizures. While many anti-epileptic treatments have been discovered, a third of epileptic patients still suffer from seizures. As a result, the search for effective treatments and cures is still ongoing. Research to identify new anti-epileptic drugs has been largely unsuccessful because the current widely employed process of targeting one gene at a time, to find suitable targets and develop new drugs, is very slow and expensive.

Assoc Prof Petretto's lab has developed a 'network-biology' approach to identify a network of genes in the brain that underlies the risk of developing both common and rare epilepsies. The approach identifies entire pathways, networks of genes and disease processes that can be studied. Most importantly, the approach can be used to predict how to manipulate the disease networks with drugs to return them to a healthy state.

Using brain samples from healthy subjects, Assoc Prof Petretto's team built gene networks that were expressed across the human brain. Then, using a large database of mutations and genes associated with epilepsy, they discovered a gene network associated with rare and common forms of epilepsy. This 'epileptic network', called M30, contains 320 genes and represents a previously unknown convergent mechanism broadly regulating susceptibility to epilepsy.
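
To make the overall logic concrete, the following is a minimal sketch of the general approach rather than the team's actual pipeline: build a co-expression module from brain expression data, then test whether that module overlaps known epilepsy-associated genes more often than chance would predict. The toy expression matrix, the number of clusters and the stand-in epilepsy gene list are all assumptions made for the example.

```python
# A minimal sketch of the general idea (not the authors' actual pipeline):
# build gene-gene co-expression links from brain expression data, group
# correlated genes into a module, and ask whether that module is enriched
# for known epilepsy-associated genes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
n_samples, n_genes = 50, 200
expr = rng.normal(size=(n_samples, n_genes))     # toy brain expression matrix
genes = [f"gene{i}" for i in range(n_genes)]

corr = np.corrcoef(expr, rowvar=False)           # gene-gene co-expression
dist = 1.0 - np.abs(corr)                        # strongly co-expressed genes lie close together
condensed = dist[np.triu_indices(n_genes, 1)]    # condensed distance vector for clustering
labels = fcluster(linkage(condensed, method="average"), t=10, criterion="maxclust")

module = {g for g, c in zip(genes, labels) if c == 1}        # one co-expression module
epilepsy_genes = {f"gene{i}" for i in range(0, n_genes, 7)}  # stand-in disease gene list

# Hypergeometric enrichment: is the module over-represented for disease genes?
overlap = len(module & epilepsy_genes)
p = hypergeom.sf(overlap - 1, n_genes, len(epilepsy_genes), len(module))
print(f"module size = {len(module)}, overlap = {overlap}, enrichment p = {p:.3g}")
```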

Analyses the team carried out in mouse models of epilepsy suggested that the down-regulation or disruption of M30 contributes to the manifestation of epilepsy. To confirm this, the team employed computational approaches based on 'network biology' and leveraged public data resources to predict which drugs and small molecules would restore the 'epileptic network' to its healthy state. This process confirmed that valproic acid, a commonly used anti-epileptic drug, is predicted to restore M30 to health. Other drugs were also identified that may act in combination with valproic acid.
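
The drug-prediction step follows a signature-reversal logic: if the disease pushes the module's genes down, a good candidate drug should push the same genes back up. The sketch below illustrates that scoring idea only; the gene names, the disease signature and the two hypothetical drug signatures are invented for illustration and are not the study's data.

```python
# A minimal sketch of signature-reversal scoring, the general logic behind
# predicting drugs that push a down-regulated gene network back toward its
# healthy state. All numbers and names below are made up for illustration;
# the study itself relied on public drug-perturbation resources.
import numpy as np

module_genes = ["geneA", "geneB", "geneC", "geneD", "geneE"]

# Disease signature: expression change of module genes in epileptic versus
# healthy brain (negative values = down-regulated, as reported for M30).
disease_sig = np.array([-1.2, -0.8, -1.5, -0.9, -1.1])

# Hypothetical drug-induced expression changes for the same genes.
drug_sigs = {
    "drug_A": np.array([0.9, 0.7, 1.1, 0.6, 0.8]),     # pushes the genes back up
    "drug_B": np.array([-0.4, 0.1, -0.6, -0.2, 0.0]),  # pushes them further down
}

def reversal_score(disease: np.ndarray, drug: np.ndarray) -> float:
    """Higher score = the drug signature opposes the disease signature more strongly."""
    return -float(np.corrcoef(disease, drug)[0, 1])

for name, sig in drug_sigs.items():
    print(name, round(reversal_score(disease_sig, sig), 3))
```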

The results of the study suggest that targeting the M30 'epileptic network' with a combination of drugs may be a viable strategy to treat epilepsy.

"Like a mechanic who can predict how to fix or rewire a broken or malfunctioning set of highly interconnected gears in a car, my group uses a 'network-biology' approach to identify interconnected genes in the brain and other organs that may be dysregulated in diseases. We then predict how these genes can be targeted for restoration of function to a normal healthy state," explained Assoc Prof Petretto.

"Our approach allowed us to identify a network associated with epilepsy and provide a compelling proof-of-principle by predicting a known anti-epileptic drug to target the network. We were able to do this in a matter of few months only using publicly available data, while a typical effort to identify anti-epileptic drugs would usually take years."

The next step for Assoc Prof Petretto is to exploit his novel 'network-biology' approach for the discovery of new drugs for specific forms of epilepsy and other untreatable neuropsychiatric disorders. This will allow his team to leverage their approach on a larger scale and identify new potentially effective drugs that can be tested in a clinical setting.


The future is bionic

The importance of “reverse-engineering” the brain in this way was touched upon in The Conversation in February.

Building a bionic brain. Mariusz Szczygiel

This was in response to an Australian Academy of Science think-tank report on Australian neuroscience research. The report discussed a program to “create a bionic brain.”

“Bionic” literally means the intersection of biology and electronics. To “create a bionic brain” in electronic circuits will certainly require electronic engineers.

Cochlear implants, for example, successfully combine electronics with brain science, and are known as “bionic ears”.

Research on electronic “medical bionics” for brain disorders, ranging from vision loss to epilepsy, Parkinson’s disease and Alzheimer’s disease, is also underway.

But as well as such applications, I believe that by mimicking the brain’s circuits, engineers also can advance fundamental understanding of neuronal computation in the brain.

In turn, this enhanced understanding will ultimately lead to engineered systems that replicate and surpass the capabilities of human intelligence.

