Senate of Canada · Committee on Human Rights · April 27, 2026

A Good Enough Ancestor

Testimony on AI, Human Rights and the Right to Work

Chair, deputy chair, honourable senators — thank you for inviting me to appear before you in my individual capacity. My perspective today is shaped by my current work as Taiwan’s cyber ambassador, my fellowship at Oxford’s Institute for Ethics in AI, and my experience as Taiwan’s first digital minister.

During my tenure in day‑to‑day government, our mandate was not just to make people trust technology. It was to make digital institutions worthy of people’s trust.

I

Worthy of Trust

Your study asks how AI affects human rights, economic security, vulnerable groups, and the international right to work. I would like to offer one frame.

Taiwan’s success against AI‑generated scam ads showed that democracies need not choose between technocratic control and platform inaction. Citizens deliberated on the balance between fraud prevention and freedom of expression. The same principle applies to work: affected people should help set the rules before systems harden into infrastructure.

The right to work in the age of AI must include three practical rights: the right to learn, the right to know, and the right to contest.

The right to learn means training before displacement, not after. Work is more than income. It is apprenticeship. It is belonging, care and dignity.

The right to know means that when AI affects hiring, scheduling, promotion, benefits, education, or public services, the people should know it is being used, who is accountable, and whose data is shaping the decision. A black‑box outcome should not be treated as due process.

The right to contest means those affected can challenge outcomes without needing a degree in computer science. Appeals must lead to repair — correction, compensation, policy change, or retiring the system altogether.

II

Ethics of Care

This matters most for those already made vulnerable by existing systems. I am thinking of Indigenous communities, migrant workers, people with physical and mental challenges, children, seniors, racialised communities, and those underrepresented in labour and skills data.

AI must not become a new way to extract knowledge without consent, to score people without context, or to make exclusion more efficient.

At Oxford, my work in Civic AI translates the ethics of care into six governance questions:

Are we hearing those closest to harm? Is someone named and accountable? Does the system work in context? Do those affected have recourse? Does it build solidarity rather than vendor lock‑in? And does it know when to stop?

For high‑impact AI, democracies should require decision traces, independent audits, accessible appeals, public incident reporting, worker and community co‑governance, sunset clauses, and procurement rules that avoid lock‑in.

A democratic system must be interruptible: possible to pause, override, or retire without disrupting essential services people depend on.

Inclusive prosperity is also democratic security. To the familiar agenda of protecting, empowering, and building, I would like to add one verb: co‑governing.

Protect people from harm. Empower them with skills and rights. Build trustworthy public infrastructure. And co‑govern AI with the workers, families, communities, and future generations who will live with these consequences.

III

A Bigger Table

Could you give a concrete example of co‑governing AI in Taiwan?

In 2024, we convened what is called an alignment assembly to respond to the harms of generative AI in scam and fraud ads online.

Deepfakes that year were prevalent in every democracy. But as the country with Asia’s freest internet, Taiwan simply could not resort to top‑down censorship. So we sent SMS messages to 200,000 random numbers around the island, asking: what should we do together?

We chose 447 people — a mini‑public statistically similar to the wider polity. In tables of ten, they deliberated. The one simple rule was that AI only facilitates: participants had to convince the nine other people at their table before an idea could bubble up.
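Selecting a mini‑public that is statistically similar to the wider polity resembles stratified random sampling: draw from each demographic stratum in proportion to its share of the population. A minimal sketch, in which the strata, shares, and volunteer pool are all invented for illustration:

```python
import random

def draw_mini_public(volunteers, population_shares, size):
    """Draw a mini-public whose strata mirror population shares.

    volunteers: dict mapping stratum -> list of volunteer IDs
    population_shares: dict mapping stratum -> share of the wider polity
    size: target mini-public size
    """
    chosen = []
    for stratum, share in population_shares.items():
        quota = round(size * share)              # seats owed to this stratum
        pool = volunteers.get(stratum, [])
        chosen.extend(random.sample(pool, min(quota, len(pool))))
    return chosen

# Hypothetical age-band strata with illustrative population shares.
volunteers = {
    "18-34": [f"a{i}" for i in range(500)],
    "35-54": [f"b{i}" for i in range(500)],
    "55+":   [f"c{i}" for i in range(500)],
}
shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
assembly = draw_mini_public(volunteers, shares, 447)
```

A real sortition process stratifies on several attributes at once (age, region, gender, and so on), but the proportional-quota idea is the same.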

Long story short, we implemented a set of ideas that more than 85% of the mini‑public agreed with — and the other 15% could live with. That included joint liability, know‑your‑customer rules, and slowing down connections by foreign platforms that did not adhere to our liability rules.

Throughout 2025, impersonation and deepfake ads were down by more than 90%.

When people want to show up at the table, the idea is not to do top‑down control. It is to invent a bigger table.

IV

Data as Soil

From an energy‑use perspective, what is the difference between general‑purpose AI and domain‑specific models?

Currently, in AI training, general‑purpose large models need to anticipate pretty much every use, from folding proteins to folding laundry, in the same model. In doing so, they are incredibly energy‑inefficient to train.

But when we know what we want the model to do — folding proteins, or folding laundry — we can train what are called domain‑specific models, or local models. These can incorporate the community’s input in a way that also protects its data from extraction to the cloud, to big tech companies.

The extractive, very energy‑consuming part can be thought of as “data as oil.” This kind of extraction goes to some large refinery somewhere. But the local way to train small models can be thought of as “data as soil” — the local community tends to these data together.

They fine‑tune the model and continuously train it. Whenever there is bias or an error, the course correction is immediate, instead of waiting for an energy‑consuming training run that might take half a year.

In Taiwan, the Ministry of Digital Affairs and the Ministry of Agriculture were set up within about a year of each other, and we worked together so that environmental sensing was not confined to a single production facility, and so that long‑term decisions around cropping, irrigation, and farmer–buyer relationships could be supported.

We have a programme called TCloud, or Taiwan Cloud, where each small and medium enterprise — including in the agricultural sector — can choose among thousands of solutions. The key is transparency and data portability: the freedom to move between vendors, so the data stays with the operators. If one prediction model or one SaaS vendor no longer fits, they can shift to another.

To bootstrap adoption, the government at one time reimbursed up to 80% of the SaaS purchase. The subsidy goes to SaaS consumers, many of them small and medium enterprises themselves — not to vendors as national or regional champions. The result is interoperability, data sovereignty, and ownership across the sector, so operators can collaboratively train sector-specific models. The same idea is now being taken up by financial-sector data coalitions among banks and insurers.

Our drone agricultural service platform, for example, brings together many small operators who share equipment, certify pilots, and share compliance records. None of them could individually afford the equipment or meet the regulatory burden, but together they reach a horizontal scale previously available only to large agribusiness.

The state’s role is not to pick a national champion. It is to subsidise the freedom to choose.

V

Without Revealing

How do you protect human‑rights information while sharing it across jurisdictions?

I would like to make the distinction between data coalitions, where people pool data in a way useful to all members, and the aggregation of data.

It is possible for multiple players, stakeholders, and communities to join a data coalition without sharing any of the raw data. A family of techniques known as zero‑knowledge proofs allows people to prove that they can do something, that they possess certain knowledge, or that a community can answer a certain kind of query — without revealing any of the personally identifiable information underneath.
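One classic instance is Schnorr’s identification protocol: a prover demonstrates knowledge of a secret exponent x satisfying y = gˣ (mod p) without revealing x. A toy sketch, with parameters far too small for real use (deployments use large primes or elliptic curves):

```python
import random

# Toy public parameters: p = 2q + 1, and g generates the order-q subgroup mod p.
p = 23          # small prime modulus (illustration only)
q = 11          # prime order of the subgroup
g = 4           # generator of the order-q subgroup (4^11 = 1 mod 23)

def prove_and_verify(x):
    """One honest round of Schnorr identification for secret x."""
    y = pow(g, x, p)              # public key derived from the secret
    r = random.randrange(q)       # prover's one-time nonce
    t = pow(g, r, p)              # commitment sent to the verifier
    c = random.randrange(q)       # verifier's random challenge
    s = (r + c * x) % q           # response; s alone reveals nothing about x
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier checks gˢ ≡ t·yᶜ (mod p); the check passes for any honest prover, yet the transcript (t, c, s) can be simulated without knowing x, which is what makes the proof zero‑knowledge.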

During the pandemic in Taiwan, we used a privacy‑preserving contact‑tracing method. A venue printed a random number in a QR code at the front door. A person scanned it and sent it to the trusted number 1922 through their telecom.

The telecom knew nothing about what the random number meant. The venue learned nothing — not even the phone number of the visitor. The state learned nothing whatsoever.

But if an infection happened, we could do contact tracing and recursive notification — again, without sacrificing the privacy of people who were not in the affected area.
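The information flow above can be sketched with hypothetical names: the venue learns no phone numbers, the carrier sees only an opaque code, and the health authority joins the two records only after a confirmed case.

```python
import secrets

class Venue:
    """Prints a fresh random code in a QR at the front door."""
    def __init__(self):
        self.code = secrets.token_hex(8)    # meaningless to everyone else

class Telecom:
    """Relays scanned codes to 1922; the codes are opaque to the carrier."""
    def __init__(self):
        self.log = []                        # (phone, opaque_code, day)
    def relay_to_1922(self, phone, code, day):
        self.log.append((phone, code, day))

def trace_contacts(telecom, infected_venue_code, window):
    """Health authority, on a confirmed case: recover only the phones that
    checked in at the infected venue during the window, and nothing else."""
    return {phone for phone, code, day in telecom.log
            if code == infected_venue_code and day in window}

# Hypothetical walk-through: two venues, two visitors, one outbreak.
cafe, gym = Venue(), Venue()
carrier = Telecom()
carrier.relay_to_1922("0912-000-111", cafe.code, day=3)
carrier.relay_to_1922("0912-000-222", gym.code, day=3)
exposed = trace_contacts(carrier, cafe.code, window={3})   # cafe visitors only
```

This is a simplification of the deployed scheme, but it shows the key property: no single party holds both the identity and the location until tracing is actually needed.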

VI

Campfires, Not Wildfires

Can AI be used to help us relate to one another rather than replace us?

I would like to make a distinction between AI that automates intelligence — what I sometimes call authoritarian intelligence, making decisions on behalf of people — and a different kind of AI.

Ten years ago on social media, many felt their agency had been taken away. Previously, when we followed the same people, we saw the same feed. But this was replaced by a very judgmental AI that personalised our feed and encouraged engagement through enragement. That is very authoritarian.

In Taiwan, we call the other kind assistive intelligence, which assists cross‑conversation between those who would otherwise not agree.

Prosocial media works like a campfire, not a wildfire: each campfire is tended by a bounded set of people — ten, a hundred, or at most a larger bonfire. We have designed prosocial media such as Polis, an open‑source technology used by more than a dozen countries worldwide, including Canada.

Instead of making outrage viral, Polis makes overlap viral. Only ideas that people who would otherwise never agree can jointly support gain virality; only bridge‑makers gain reach. In doing so, people heal polarisation and division.

Our demonstration was successful enough that even traditional social media platforms such as X, formerly Twitter, have now adopted a very similar algorithm called Community Notes. It lets people who bridge across ideologies write notes to clarify or add context next to viral misinformation, disinformation, or simply contentious information.

Now we work with all the major social media companies on Community Notes implementation, and on collaborative notes — drafted by AI and instantly corrected by humans, so AI can learn what translates across communities: for example, between the climate‑justice community on one side and the biblical creation‑care community on the other, so each can understand the other’s vocabulary.
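The production systems are more sophisticated — Community Notes, for instance, factorises the full rater–note matrix — but the core bridging intuition can be sketched as scoring each idea by its weakest support across opinion clusters, so only cross‑cluster overlap surfaces. The clusters and votes below are invented for illustration:

```python
def bridge_score(votes_by_cluster):
    """Score an idea by its minimum agreement rate across opinion clusters.

    votes_by_cluster: dict mapping cluster -> list of +1 (agree) / -1 votes.
    An idea scores highly only if every cluster leans toward agreement, so
    divisive ideas (loved by one side, rejected by the other) sink.
    """
    rates = [sum(1 for v in vs if v > 0) / len(vs)
             for vs in votes_by_cluster.values()]
    return min(rates)

# Two hypothetical opinion clusters, "A" and "B", voting on two ideas.
divisive = {"A": [1, 1, 1, 1], "B": [-1, -1, -1, 1]}   # one side loves it
bridging = {"A": [1, 1, -1, 1], "B": [1, 1, 1, -1]}    # both sides mostly agree

ranked = sorted([("divisive", bridge_score(divisive)),
                 ("bridging", bridge_score(bridging))],
                key=lambda kv: kv[1], reverse=True)
```

Under this scoring the bridging idea (0.75 in every cluster) outranks the divisive one (whose minimum is 0.25), which is the opposite of engagement‑maximising ranking.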

VII

Interoperable Governance

The UN’s 2024 report on governing AI for humanity points to a global governance deficit. How do we close it?

I think the report’s diagnosis is correct: a patchwork of principles without enforceable duties will not govern AI, which is intrinsically a global phenomenon and technology.

Democracies, especially including middle powers, should build interoperable governance. That does not mean identical governance applied in the same way everywhere around the globe. It means auditing standards, incident‑reporting standards, provenance for synthetic media, procurement requirements that avoid vendor lock‑in, and similar technical and institutional stacks that can be made to work across jurisdictions without harmonising every domestic rule.

Canada, Taiwan, and other free and open societies can be peers in this work. The global governance deficit will not be closed by another universal principle, but by enforceable duties that make principles contestable in each and every domain.

VIII

Sovereignty Is Not Solitude

How can the federal government best support Indigenous data sovereignty?

Taiwan has 16 Indigenous nations and more than 42 language variations. We see cultural sovereignty — and also transcultural sovereignty, the ability to translate across cultures — as very important.

When we say “sovereign AI” in Taiwan, we do not mean just a national AI model that speaks Mandarin, Taigi, Hakka, and Indigenous languages. We mean a reproducible process for language communities to own the social and cultural composition of the data curated within those language communities.

It also includes alignment assemblies — ways for people to draw boundaries around how AI should enter their community, almost like a code of conduct for AI agents.

These two together enable each community not only to feel that it owns its social and cultural self‑determination when it comes to language‑model training, but also to incorporate transcultural translation capabilities. When one language or culture gains a certain capability, another community can choose to bring it into its own context.

But the agency, the sovereignty, is held by the Indigenous nation or community — not by a top‑down national commission.

No one should be automated out of agency. No community’s knowledge, language, or labour should be treated as raw material without consent. No worker should have to negotiate alone with a black box.

Inclusive prosperity is also democratic security. Canada and Taiwan are both free and open societies, and we know our adversaries are testing our seams of trust.

But sovereignty is not solitude. It is the way to protect people and cooperate without surrendering public judgment.
