As for C++, I happened to have an online course with only a few hours left, so I finished it off today. After all, over the two days of flying back to Taiwan via a connection in India, I slept less than ten hours in total, far too exhausted to think about anything serious. Such unusable time was a perfect match for an easy course and a dead-simple final assignment. (The Adjacency List assignment was much harder: the problem statement ran on and on, and you had to figure out for yourself where the class was defined.)
Although the course uses some C++, its real focus is data structures. For example: Dijkstra's algorithm cannot find the shortest path when an edge has a negative weight. In plain language: if someone in your workflow is dragging everyone down, no amount of optimization will stop you from going in circles. Overall, the course is pretty good.
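To make the negative-weight point concrete, here is a minimal sketch of my own (not from the course): on a three-node graph where A→B costs 2, A→C costs 3, and C→B costs -2, the textbook Dijkstra finalizes B at cost 2 before it ever sees C's negative edge, while Bellman-Ford keeps relaxing and finds the true cost of 1.

    import heapq

    def dijkstra(graph, source):
        # Textbook Dijkstra: once a node is popped it is finalized,
        # which is only safe when every edge weight is non-negative.
        dist = {v: float("inf") for v in graph}
        dist[source] = 0
        done = set()
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)  # u is never relaxed again
            for v, w in graph[u]:
                if v not in done and d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    def bellman_ford(graph, source):
        # Bellman-Ford keeps relaxing for |V| - 1 rounds, so a later
        # negative edge can still improve an earlier estimate.
        dist = {v: float("inf") for v in graph}
        dist[source] = 0
        for _ in range(len(graph) - 1):
            for u in graph:
                for v, w in graph[u]:
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
        return dist

    # A -> B costs 2; the cheaper route A -> C -> B costs 3 + (-2) = 1.
    graph = {"A": [("B", 2), ("C", 3)], "B": [], "C": [("B", -2)]}
    print(dijkstra(graph, "A")["B"])      # 2  (wrong: B was finalized too early)
    print(bellman_ford(graph, "A")["B"])  # 1  (correct)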
I originally signed up for Coursera to learn AI. To maximize the return on my investment, I bought a one-year all-you-can-eat Plus membership to work through TensorFlow, LLM, and other AI training courses. By now I have finished pretty much every course worth taking, and I have even formed some impressions. For the same generative AI subject, Google's version emphasizes AI ethics, Amazon's emphasizes hands-on work in the AWS ecosystem, IBM's emphasizes application to project management, and DeepLearning.AI emphasizes completeness of the material.
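The script below adapts Meta's Llama 2 chat-completion example (per the license header) into a simple interactive Q&A loop: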
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.

from typing import Optional

import fire

from llama import Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    # Load the model weights and tokenizer once, up front.
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # A single one-turn dialog; its user message is overwritten on every pass,
    # so the model sees no chat history between questions.
    dialogs = [
        [{"role": "user", "content": "Are you ready to answer questions?"}],
    ]
    print(dialogs[0][0].keys())    # debug: confirm the message structure
    print(dialogs[0][0].values())

    for _ in range(100):  # cap the session at 100 questions
        question = input("Input Question\n")
        print("Your Question: {0}".format(question))
        if question == "bye":
            print("LLAMA 2: So long!\n")
            break
        dialogs[0][0]["content"] = question
        results = generator.chat_completion(
            dialogs,  # type: ignore
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )
        # Print the prompt followed by the model's reply.
        for dialog, result in zip(dialogs, results):
            for msg in dialog:
                print(f"{msg['role'].capitalize()}: {msg['content']}\n")
            print(
                f"> {result['generation']['role'].capitalize()}: {result['generation']['content']}"
            )
            print("\n==================================\n")


if __name__ == "__main__":
    fire.Fire(main)
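For the record, scripts in Meta's llama repo are launched with torchrun (one process for the 7B checkpoint). Something along these lines should work, where chat.py is whatever you named the script above and the checkpoint and tokenizer paths are illustrative:

    torchrun --nproc_per_node 1 chat.py \
        --ckpt_dir llama-2-7b-chat/ \
        --tokenizer_path tokenizer.model \
        --max_seq_len 512 --max_batch_size 4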