Introducing Badger: A fast key-value store written purely in Go

chenxu published an article • 0 comments • 193 views • 2017-08-18 17:52 • from related topics



We have built an efficient and persistent log-structured merge (LSM) tree based key-value store, purely in Go. It is based on the WiscKey paper from USENIX FAST 2016. The design is highly SSD-optimized and separates keys from values to minimize I/O amplification, leveraging both the sequential and the random performance of SSDs.


We call it Badger. Based on benchmarks, Badger is at least 3.5x faster than RocksDB when doing random reads. For value sizes between 128B and 16KB, data loading is 0.86x - 14x faster compared to RocksDB, with Badger gaining significant ground as value size increases. On the flip side, Badger is currently slower for range key-value iteration, but there is a lot of room for optimization there.


Background and Motivation


A word about RocksDB


RocksDB is the most popular and probably the most efficient key-value store on the market. It originated at Google as the SSTable format that formed the basis for Bigtable, and was then released as LevelDB. Facebook improved LevelDB, adding concurrency and SSD optimizations, and released the result as RocksDB. Work on RocksDB has been going on continuously for many years now, and it's used in production at Facebook and many other companies.


So naturally, if you need a key-value store, you'd gravitate towards RocksDB. It's a solid piece of technology, and it works. The biggest issue with using RocksDB is that it is written in C++, which means Go programs must call it via Cgo.


Cgo: The necessary evil


At Dgraph, we have been using RocksDB via Cgo since we started. And we’ve faced many issues over time due to this dependency. Cgo is not Go, but when there are better libraries in C++ than Go, Cgo is a necessary evil.


The problem is that the Go CPU profiler doesn't see beyond Cgo calls. The Go memory profiler takes it one step further: forget about a memory-usage breakdown in Cgo space, it fails to even notice the presence of Cgo code. Any memory used by Cgo never makes it to the memory profiler. Other tools, like the Go race detector, don't work either.


Cgo caused us pthread_create issues in Go 1.4, and then again in Go 1.5 due to a bug regression. Lightweight goroutines become expensive pthreads when Cgo is involved, and we had to modify how we were writing data to RocksDB to avoid spawning too many goroutines.


Cgo has caused us memory leaks. Who owns and manages memory when making calls is just not clear. Go and C sit at opposite ends of the spectrum: one doesn't let you free memory, the other requires it. So you make a Go call, forget to Free(), and nothing breaks. Except much later.


Cgo has given us unmaintainable code. Cgo makes code ugly. The Cgo layer between Dgraph and RocksDB was the one piece of code no one on the team wanted to touch.


Sure, we fixed the memory leaks in our API usage over time. In fact, I think we have fixed them all by now, but I can't be sure. The Go memory profiler would never tell you. And every time someone complains about Dgraph taking up more memory or crashing due to OOM, it makes me nervous that this is a memory-leak issue.


Huge undertaking


Everyone I told about our woes with Cgo told me we should just work on fixing those issues: writing a key-value store that can match RocksDB's performance would be a huge undertaking, not worth our effort. Even my team wasn't sure. I had my doubts as well.


I have great respect for any piece of technology that has been iterated upon for years by the smartest engineers on the face of the planet. RocksDB is that. And if I were writing Dgraph in C++, I'd happily use it.



But, I just hate ugly code.



And I hate recurring bugs. No amount of effort would have ensured that we'd never again have issues with using RocksDB via Cgo. I wanted a clean slate, and my profiler tools back. Building a key-value store in Go from scratch was the only way to achieve it.


I looked around. The existing key-value stores written in Go didn’t even come close to RocksDB’s performance. And that’s a deal breaker. You don’t trade performance for cleanliness. You demand both.


So I decided we would replace our dependency on RocksDB. But given this wasn't a priority for Dgraph, none of the team members should work on it; it would be a side project that only I would undertake. I started reading up on B+ and LSM trees and recent improvements to their design, and came across the WiscKey paper. It had great, promising ideas. I decided to spend a month away from core Dgraph, building Badger.


That’s not how it went. I couldn’t spend a month away from Dgraph. Between all the founder duties, I couldn’t fully dedicate time to coding either. Badger developed during my spurts of coding activity, and one of the team members’ part-time contributions. Work started end January, and now I think it’s in a good state to be trialed by the Go community.


LSM trees


Before we delve into Badger, let’s understand key-value store designs. They play an important role in data-intensive applications including databases. Key-value stores allow efficient updates, point lookups and range queries.


There are two popular types of implementations: Log-structured merge (LSM) tree based, and B+ tree based. The main advantage LSM trees have is that all the foreground writes happen in memory, and all background writes maintain sequential access patterns. Thus they achieve very high write throughput. On the other hand, small updates on B+ trees involve repeated random disk writes, and hence are unable to sustain high-throughput write workloads [1].


To deliver high write performance, LSM trees batch key-value pairs and write them sequentially. Then, to enable efficient lookups, LSM trees continuously read, sort, and rewrite key-value pairs in the background. This is known as compaction. LSM trees do this over many levels, each level holding a factor more data than the previous, typically size of Li+1 = 10 x size of Li.


Within a single level, key-values get written into fixed-size files in sorted order. Except for level zero, files within the same level have no overlapping key ranges.


Each level has a maximum capacity. As a level Li fills up, its data gets merged with data from the lower level Li+1, and the files in Li are deleted to make space for more incoming data. As data flows from level zero to levels one, two, and so on, the same data is rewritten multiple times throughout its lifetime. Each key update causes many writes until the data eventually settles. This constitutes write amplification. For a 7-level LSM tree with a 10x size-increase factor, this can be 60: 10 for each transition from L1->L2, L2->L3, and so on, ignoring L0 due to special handling.


Conversely, to read a key from an LSM tree, all the levels need to be checked. If the key is present in multiple levels, the version at the level closer to zero is picked (that version is more up to date). Thus a single key lookup causes many reads over files; this constitutes read amplification. The WiscKey paper estimates this to be 336 for a 1KB key-value pair.


LSMs were designed around hard drives. On HDDs, random I/Os are over 100x slower than sequential ones, so running compactions to continually sort keys and enable efficient lookups is an excellent trade-off.


(Image: Samsung 960 Pro NVMe SSD)


However, SSDs are fundamentally different from HDDs. The gap between their sequential and random read performance is not nearly as large as it is on HDDs. In fact, top-of-the-line SSDs like the Samsung 960 Pro can deliver 440K random read operations per second at a 4KB block size. Thus, an LSM tree that performs a large number of sequential writes just to reduce later random reads is wasting bandwidth needlessly.


Badger


Badger is a simple, efficient, and persistent key-value store. Inspired by the simplicity of LevelDB, it provides Get, Set, Delete, and Iterate functions. On top of that, it adds CompareAndSet and CompareAndDelete atomic operations (see GoDoc). It does not aim to be a database, and hence does not provide transactions, versioning, or snapshots. Those can easily be built on top of Badger.
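To make the API concrete, here is a minimal usage sketch in Go. It only uses the calls named above (Set, Get); the option fields and method signatures follow the early GoDoc from memory and may differ between releases, so treat this as illustrative rather than authoritative.

package main

import (
	"log"

	"github.com/dgraph-io/badger"
)

func main() {
	// Open a KV store; Dir holds the LSM tree, ValueDir the value log.
	opt := badger.DefaultOptions
	opt.Dir = "/tmp/badger"
	opt.ValueDir = "/tmp/badger"
	kv, err := badger.NewKV(&opt)
	if err != nil {
		log.Fatal(err)
	}
	defer kv.Close()

	// Set writes to the value log first, then to the LSM tree.
	if err := kv.Set([]byte("answer"), []byte("42")); err != nil {
		log.Fatal(err)
	}

	// Get fetches the value pointer from the LSM tree, then the value.
	var item badger.KVItem
	if err := kv.Get([]byte("answer"), &item); err != nil {
		log.Fatal(err)
	}
	log.Printf("answer = %s", item.Value())
}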


Badger separates keys from values. The keys are stored in the LSM tree, while the values are stored in a write-ahead log called the value log. Keys tend to be smaller than values, so this setup produces a much smaller LSM tree. When required, values are read directly from the log on SSD, utilizing its vastly superior random read performance.


Guiding principles


These are the guiding principles that decide the design: what goes into Badger and what stays out.



  • Write it purely in Go language.

  • Use the latest research to build the fastest key-value store.

  • Keep it simple, stupid.

  • SSD-centric design.


Key-Value separation


The major performance cost of LSM trees is the compaction process. During compactions, multiple files are read into memory, sorted, and written back. Sorting is essential for efficient retrieval, for both key lookups and range iterations. With sorting, key lookups require accessing at most one file per level (excluding level zero, where we'd need to check all the files), and iterations result in sequential access to multiple files.


Each file is of fixed size, to enhance caching. Values tend to be larger than keys. When you store values along with the keys, the amount of data that needs to be compacted grows significantly.


In Badger, only a pointer to the value in the value log is stored alongside the key. Badger employs delta encoding for keys to reduce their effective size even further. Assuming 16 bytes per key and 16 bytes per value pointer, a single 64MB file can store two million key-value pairs.
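To make the arithmetic concrete, here is one hypothetical layout for such a value pointer; the field names and the exact encoding are invented for illustration, not Badger's actual format.

package main

import (
	"fmt"
	"unsafe"
)

// A hypothetical 16-byte value pointer stored in the LSM tree next to
// each key; the value itself stays in the value log.
type valuePointer struct {
	Fid    uint32 // which value log file holds the value
	Len    uint32 // length of the entry in bytes
	Offset uint64 // byte offset of the entry within the file
}

func main() {
	const keySize = 16
	entrySize := keySize + int(unsafe.Sizeof(valuePointer{})) // 16 + 16 = 32 bytes
	fmt.Println(64<<20/entrySize, "entries per 64MB file")    // 2097152, i.e. ~2 million
}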


Write Amplification


Thus, the LSM tree generated by Badger is much smaller than that of RocksDB. This smaller LSM tree reduces the number of levels, and hence the number of compactions required to reach stability. Also, values are not moved along with keys, because they live separately in the value log. Assuming 1KB values and 16-byte keys, the effective write amplification per level is (10*16 + 1024)/(16 + 1024) ~ 1.14, a much smaller factor.
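The formula is easy to sanity-check. This little program just reproduces the arithmetic above under the same assumptions (16-byte keys, 1KB values, 10x level fanout):

package main

import "fmt"

func main() {
	const keySize, valueSize, fanout = 16.0, 1024.0, 10.0
	// Keys are rewritten ~fanout times per level; the 1KB value is
	// written once to the value log and never moves with the keys.
	ampl := (fanout*keySize + valueSize) / (keySize + valueSize)
	fmt.Printf("effective write amplification per level: %.2f\n", ampl) // ~1.14
}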


You can see the performance gains of this approach over RocksDB as the value size increases: loading data into Badger takes a fraction of the time (see Benchmarks below).


Read Amplification


As mentioned above, the LSM tree generated by Badger is much smaller, and each file at each level stores many more keys than in typical LSM trees. Thus, for the same amount of data, fewer levels get filled up. A typical key lookup requires reading all files in level zero and one file per level from level one onwards; with Badger, fewer filled levels mean fewer files to read to look up a key. Once the key (along with its value pointer) is fetched, the value is retrieved with a single random read into the value log stored on SSD.


Furthermore, during benchmarking we found that Badger's LSM tree is so small, it can easily fit in RAM. For 1KB values and 75 million 22-byte keys, the raw size of the entire dataset is 72GB, while Badger's LSM tree for this setup is a mere 1.7GB. This is what makes Badger's random key lookups at least 3.5x faster, and its key-only iteration blazingly faster than RocksDB.


Crash resilience


LSM trees write all updates to in-memory memtables first. Once a memtable fills up, it is swapped for an immutable memtable, which eventually gets written out to files in level zero on disk.


In the case of a crash, all recent updates still in memtables would be lost. Key-value stores deal with this by first writing all updates to a write-ahead log. Badger has one too; it's called the value log.


Just like a typical write-ahead log, every update is written to the value log first, before being applied to the LSM tree. In the case of a crash, Badger iterates over the recent updates in the value log and applies them back to the LSM tree.


Instead of iterating over the entire value log, Badger stores a pointer to the latest value in each memtable. Effectively, the latest memtable that made it to disk carries a value pointer before which all updates are already persisted. Thus, we can replay the value log from this pointer onwards and reapply the updates to the LSM tree to get everything back.
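A toy model of the replay step, with invented types to keep it self-contained (this is not Badger's code): everything in the value log at or after the checkpointed pointer is reapplied to the tree.

package main

import "fmt"

// Hypothetical, simplified types for illustration only.
type entry struct {
	key    string
	offset int // position of this entry in the value log
}

// replay reapplies every value-log entry at or after the checkpoint to
// an in-memory stand-in for the LSM tree, which maps keys to offsets
// (i.e. value pointers), not to the values themselves.
func replay(vlog []entry, checkpoint int, lsm map[string]int) {
	for _, e := range vlog {
		if e.offset >= checkpoint {
			lsm[e.key] = e.offset
		}
	}
}

func main() {
	vlog := []entry{{"a", 0}, {"b", 10}, {"a", 20}}
	lsm := map[string]int{"a": 0} // state that survived the crash
	replay(vlog, 10, lsm)         // checkpoint from the last flushed memtable
	fmt.Println(lsm)              // map[a:20 b:10]
}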


Overall size


RocksDB applies block compression to reduce the size of its LSM tree. Badger's LSM tree is much smaller in comparison and can be stored in RAM entirely, so it doesn't need any compression on the tree. However, the value log can grow quite quickly: each update is a new entry, so multiple updates of the same key take up space multiple times.


To deal with this, Badger does two things. First, it allows compressing values in the value log. Instead of compressing multiple key-values together, each key-value is compressed individually, which preserves the best possible random read performance. Compression only kicks in when the key-value size exceeds an adjustable threshold, 1KB by default.


Second, Badger runs value-log garbage collection. This runs periodically and samples 100MB of a randomly selected value log file, checking whether a significant chunk of it can be discarded due to newer updates in later logs. If so, the still-valid key-value pairs are appended to the log, the older file is deleted, and the value pointers are updated in the LSM tree. The downside is that this adds work for the LSM tree, so it shouldn't run while loading a huge data set. More work is needed so that this garbage collection only triggers during periods of low client activity.
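The sampling decision can be modeled in a few lines. The types and bookkeeping here are invented for illustration; Badger's actual accounting differs.

package main

import "fmt"

// entry is a hypothetical record of one value-log write.
type entry struct {
	key    string
	offset int
}

// discardRatio reports what fraction of a sampled slice of a value-log
// file has been superseded by newer writes recorded in latest.
func discardRatio(sample []entry, latest map[string]int) float64 {
	stale := 0
	for _, e := range sample {
		if latest[e.key] != e.offset { // a newer version lives elsewhere
			stale++
		}
	}
	return float64(stale) / float64(len(sample))
}

func main() {
	latest := map[string]int{"a": 20, "b": 10} // current value pointers
	sample := []entry{{"a", 0}, {"b", 10}, {"a", 20}}
	ratio := discardRatio(sample, latest)
	fmt.Printf("discardable: %.0f%%\n", 100*ratio) // 33%
	// If ratio clears a threshold, rewrite the live entries and drop the file.
}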


Hardware Costs


But given that SSDs are getting cheaper and cheaper, using extra SSD space costs almost nothing compared to having to store and serve a major chunk of the LSM tree from memory. Consider this:


For 1KB values and 75 million 16-byte keys, RocksDB's LSM tree is 50GB in size. Badger's value log is 74GB (without value compression) and its LSM tree is 1.7GB. Extrapolating three times to 225 million keys, RocksDB takes 150GB, while Badger takes a 222GB value log plus a 5.1GB LSM tree.


Using Amazon AWS US East (Ohio) datacenter:



  • To achieve random read performance equivalent to Badger's (at least 3.5x faster), RocksDB would need to run on an r3.4xlarge instance, which provides 122GB of RAM for $1.33 per hour, so most of its LSM tree can fit into memory.

  • Badger can be run on the cheapest storage optimized instance i3.large, which provides 475GB NVMe SSD (fio test: 100K IOPS for 4KB block size), with 15.25GB RAM for $0.156 per hour.

  • Running Badger is thus 8.5x cheaper than running RocksDB on EC2, on-demand.

  • With a 1-year term, all upfront, this is $6,182 for RocksDB vs. $870 for Badger, still 7.1x cheaper. That's a whopping 86% saving.


Benchmarks


Setup


We rented a storage-optimized i3.large instance from Amazon AWS, which provides 450GB of NVMe SSD storage and 2 virtual cores, along with 15.25GB of RAM. We tested the local SSD via fio and found it sustains close to 100K random read IOPS at a 4KB block size.


The data sets were chosen to generate sizes too big to fit entirely in RAM, in either RocksDB or Badger.


Value size    Number of keys (each key = 22B)    Raw data size
128B          250M                               35GB
1024B         75M                                73GB
16KB          5M                                 76GB

We then loaded each data set, first into RocksDB and then into Badger, never running the loaders concurrently. This gave us the data loading times and output sizes. For random Get and Iterate, we used Go benchmark tests and ran them for 3 minutes each, going down to 1 minute for 16KB values.


All the benchmarking code is available in this repo. All the commands run and the measurements recorded are available in this log file. The charts and their data are viewable here.


Results


In the following benchmarks, we measured 4 things:



  • Data loading performance

  • Output size

  • Random key lookup performance (Get)

  • Sorted range iteration performance (Iterate)


All four measurements are visualized in the following charts. (Charts: Badger benchmarks, https://github.com/dgraph-io/badger)


Data loading performance: Badger's key-value separation shows huge performance gains as value sizes increase. For value sizes of 1KB and 16KB, Badger achieves 4.5x and 11.7x the throughput of RocksDB, respectively. For smaller values, like 16 bytes (not shown here), Badger can be 2-3x slower, due to slower compactions (see further work).


Store size: Badger generates a much smaller LSM tree, but a larger value log. The size of Badger's LSM tree is proportional only to the number of keys, not the values, so it decreases as we progress from 128B to 16KB values. In all three scenarios, Badger produced an LSM tree that could fit entirely in the RAM of the target server.


Random read latency: Badger’s Get latency is only 18% to 27% of RocksDB’s Get latency. In our opinion, this is the biggest win of this design. This happens because Badger’s entire LSM tree can fit into RAM, significantly decreasing the amount of time it takes to find the right tables, check their bloom filters, pick the right blocks and retrieve the key. Value retrieval is then a single SSD file.pread away.


In contrast, RocksDB can’t fit the entire tree in memory. Even assuming it can keep the table index and bloom filters in memory, it would need to fetch the entire blocks from disk, decompress them, then do key-value retrieval (Badger’s smaller LSM tree avoids the need for compression). This obviously takes longer, and given lack of data access locality, caching isn’t as effective.


Range iteration latency: Badger’s range iteration is significantly slower than RocksDB’s range iteration, when values are also retrieved from SSD. We didn’t expect this, and still don’t quite understand it. We expected some slowdown due to the need to do IOPS on SSD, while RocksDB does purely serial reads. But, given the 100K IOPS i3.large instance is capable of, we didn’t even come close to using that bandwidth, despite pre-fetching. This needs further work and investigation.


On the other end of the spectrum, Badger's key-only iteration is blazingly fast, far faster than RocksDB or than its own key-value iteration (its latency shows up as the almost invisible red bar). This is quite useful in certain use cases we have at Dgraph, where we iterate over the keys, run filters, and only retrieve values for a much smaller subset of keys.


Further work


Speed of range iteration


While Badger can do key-only iteration blazingly fast, things slow down when it also needs to do value lookups. Theoretically, this shouldn't be the case: Amazon's i3.large disk-optimized instance can do 100,000 4KB-block random reads per second, so we should be able to iterate 100K key-value pairs per second, or six million key-value pairs per minute.


However, Badger’s current implementation doesn’t produce SSD random read requests even close to this limit, and the key-value iteration suffers as a result. There’s a lot of room for optimization in this space.


Speed of compactions


Badger is currently slower at running compactions than RocksDB. Because of this, for a dataset consisting purely of smaller values, it is slower to load data into Badger. This needs more optimization.


LSM tree compression


Again, for a dataset consisting purely of smaller values, the LSM tree would be significantly larger than RocksDB's, because Badger doesn't compress the LSM tree. This should be easy to add if needed, and would make a great first-time contributor project.


B+ tree approach


[1] Recent improvements to SSDs might make B+ trees a viable option. Since the WiscKey paper was written, SSDs have made huge gains in random write performance. An interesting new direction would be to combine the value log approach with a B+ tree that holds only keys and value pointers. This would trade the LSM tree's read-sort-merge sequential-write compactions for many random writes per key update, and might achieve the same write throughput as an LSM for a much simpler design.


Conclusion


We have built an efficient key-value store that can compete in performance with top-of-the-line key-value stores on the market. It is currently rough around the edges, but provides a solid platform for any industrial application, be it data storage or building another database.


We will be replacing Dgraph’s dependency on RocksDB soon with Badger; making our builds easier, faster, making Dgraph cross-platform and paving the way for embeddable Dgraph. The biggest win of using Badger is a performant Go native key-value store. The nice side-effects are ~4 times faster Get and a potential 86% reduction in AWS bills, due to less reliance on RAM and more reliance on ever faster and cheaper SSDs.


So try out Badger in your project, and let us know your experience.


P.S. Special thanks to Sanjay Ghemawat and Lanyue Lu for responding to my questions about design choices.






**We are building an open source, real time, horizontally scalable and distributed graph database.**



Get started with Dgraph. [https://docs.dgraph.io](https://docs.dgraph.io)
See our live demo. [https://dgraph.io](https://dgraph.io)
Star us on Github. [https://github.com/dgraph-io/dgraph](https://github.com/dgraph-io/dgraph)
Ask us questions. [https://discuss.dgraph.io](https://discuss.dgraph.io)


**We're starting to support enterprises in deploying Dgraph in production. [Talk to us](manish@dgraph.io), if you want us to help you try out Dgraph at your organization.**




*Top image: The Juno spacecraft is the [fastest moving human-made object](http://www.livescience.com/326 ... r.html), traveling at a speed of 265,000 km/h relative to Earth.*

Golang web starter

dasheng published an article • 2 comments • 390 views • 2017-07-30 00:13 • from related topics


Background


Web applications have long been the battleground of languages such as Ruby, Java, and PHP.



  • Ruby enables rapid prototyping, and the "all-in-one" Ruby on Rails framework enables "full-stack" development; its downsides are poor performance for large applications and difficult debugging.

  • Java has 20+ years of history, a complete ecosystem of third-party libraries and frameworks, and high runtime efficiency; but as an application's features grow, bloated get/set methods, heavy JVM resource usage, hard performance tuning, and unfriendliness to functional programming become pain points.

  • PHP: TL;DR.


This article builds a minimal web application to explore the Golang web ecosystem, using Docker to isolate the development environment and Postgres to persist data. See here for the source code.


Why Go?



  • Excellent performance

  • Simple deployment: just ship the compiled binary to the server

  • A rich built-in standard library that makes programmers' lives simple and pleasant

  • A static language with type checking

  • Duck typing

  • Goroutines free developers from the burden of concurrent programming

  • Functions as "first-class citizens"

  • ...


Choosing third-party Go frameworks



  • Web framework: Gin, with excellent performance, a friendly API, and complete features

  • ORM: GORM, which supports several mainstream database dialects and has clear documentation

  • Package manager: Glide, similar to Ruby's bundler or npm in NodeJS

  • Testing tools:

    • GoConvey, BDD-style tests with browser-based visualization of test results

    • Testify, rich assertions and mocking

  • Database migration: migrate

  • Logging: Logrus, structured log output, fully compatible with the standard library logger


Dockerizing the development environment


Publishing the application base image


The Dockerfile:


FROM golang:1.8

# Package manager
RUN curl https://glide.sh/get | sh

# Hot reload for code
RUN go get github.com/codegangsta/gin

# Database migration tool
RUN go get -u -d github.com/mattes/migrate/cli github.com/lib/pq
RUN go build -tags 'postgres' -o /usr/local/bin/migrate github.com/mattes/migrate/cli

Publishing the database base image


The Dockerfile:


FROM postgres:9.6

# Initialize the database configuration
COPY ./init-user-db.sh /docker-entrypoint-initdb.d/init-user-db.sh

Starting the services


Run auto/dev to start everything; the configuration is below.



  • docker-compose.yml:


version: "3"

services:
  dev:
    links:
      - db
    image: 415148673/golang-web-base-image@sha256:18de5eb058a54b64f32d58b57a1eb3009b9ed49d90bd53056b95c5c8d5894cd6
    environment:
      - PORT=8080
      - DB_USER=docker
      - DB_HOST=db
      - DB_NAME=webstarter
    volumes:
      - .:/go/src/golang-web-starter
    working_dir: /go/src/golang-web-starter
    ports:
      - "3000:3000"
    command: gin

  db:
    image: 415148673/postgres@sha256:6d4800c53e68576e05d3a61f2b62ed573f40692bcc72a3ebef3b04b3986bb70c
    volumes:
      - go-web-starter-db-cache:/var/lib/postgresql/data

volumes:
  go-web-starter-db-cache:


  • The glide configuration file for third-party dependencies; install them by running glide install inside the container:


package: golang-web-starter
import:
- package: github.com/gin-gonic/gin
  version: ^1.1.4
- package: github.com/jinzhu/gorm
  version: ^1.0.0
- package: github.com/mattes/migrate
  version: ^3.0.1
- package: github.com/lib/pq
- package: github.com/stretchr/testify
  version: ^1.1.4
- package: github.com/smartystreets/goconvey
  version: ^1.6.2


  • The database migration script:


migrate -source file://migrations -database "postgres://$DB_USER:$DB_PASSWORD@$DB_HOST:5432/$DB_NAME?sslmode=disable" up

Implementing the business logic


Router


router := gin.Default()
router.GET("/", handler.ShowIndexPage)        // show the index page
router.GET("/book/:book_id", handler.GetBook) // look up a book by id
router.POST("/book", handler.SaveBook)        // save a book

handler


Taking saving a book as an example:


func SaveBook(c *gin.Context) {
	var book models.Book
	if err := c.Bind(&book); err == nil {
		// Call the model's save method
		book := models.SaveBook(book)

		// Bind the data needed by the front-end page
		utility.Render(
			c,
			gin.H{
				"title":   "Save",
				"payload": book,
			},
			"success.html",
		)
	} else {
		// Error handling
		c.AbortWithError(http.StatusBadRequest, err)
	}
}

model


func SaveBook(book Book) Book {
	// Persist the data
	utility.DB().Create(&book)
	return book
}

Establishing the DB connection


func DB() *gorm.DB {
	dbInfo := fmt.Sprintf(
		"host=%s user=%s dbname=%s sslmode=disable password=%s",
		os.Getenv("DB_HOST"),
		os.Getenv("DB_USER"),
		os.Getenv("DB_NAME"),
		os.Getenv("DB_PASSWORD"),
	)
	db, err := gorm.Open("postgres", dbInfo)
	if err != nil {
		log.Fatal(err)
	}
	return db
}

View


<body class="container">
  {{ template "menu.html" . }}
  <label>保存成功</label>
  <h1>{{.payload.Title}}</h1>
  <p>{{.payload.Author}}</p>
  {{ template "footer.html" .}}
</body>

Testing


func TestSaveBook(t *testing.T) {
	r := utility.GetRouter(true)
	r.POST("/book", SaveBook)

	Convey("The params can not convert to model book", t, func() {
		req, _ := http.NewRequest("POST", "/book", nil)
		req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

		utility.TestHTTPResponse(r, req, func(w *httptest.ResponseRecorder) {
			So(w.Code, ShouldEqual, http.StatusBadRequest)
		})
	})

	Convey("The params can convert to model book", t, func() {
		req, _ := http.NewRequest("POST", "/book", strings.NewReader("title=Hello world&author=will"))
		req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
		utility.TestHTTPResponse(r, req, func(w *httptest.ResponseRecorder) {
			p, _ := ioutil.ReadAll(w.Body)
			So(w.Code, ShouldEqual, http.StatusOK)
			So(string(p), ShouldContainSubstring, "保存成功")
		})
	})
}

Summary


The vitality of the Go ecosystem opened my eyes: famous applications such as Docker and Ethereum are written in Go. Building a web app in Go feels like playing with building blocks: a suitable third-party library has to be carefully chosen from several candidates, and countless open-source authors have together built a kingdom of "modules". In such an environment, programming becomes a very liberating activity. And since Go's standard library ships practical built-in commands such as go fmt and go test, programming becomes remarkably easy; practically a paradise for the obsessive programmer.
Of course, Go is still evolving and has its rough edges, for example:



  • It lacks a standard dependency-management tool (dep is under development)

  • The decentralized dependency ecosystem means an application can break when a dependency gets deleted.


Follow my WeChat public account for more posts:
whisperd

A simple LRU implemented in Go

lys86_1205 published an article • 0 comments • 257 views • 2017-07-27 14:02 • from related topics

LRU

LiteIDE X32.2 released: a Go language IDE

visualfc published an article • 1 comment • 299 views • 2017-07-19 08:21 • from related topics

LiteIDE X32.2, the Go language development tool, has been officially released. This version fixes the editor's external-file watcher becoming invalid after many modifications, enables saving breakpoints in the debug plugin, fixes debugging of test cases, and fixes the working directory of the Dlv debug process.





2017.7.18 Ver X32.2



  • LiteApp

    • fix editor file watcher is invalid for many change


  • GolangEdit

    • fix TODO/BUG/FIXME comment syntax


  • DlvDebugger

    • fix dlv headless process workdir


  • LiteDebug

    • fix debug tests action

    • fix load and save breakpoint for editor


Fetching Douyu danmaku (bullet comments)

songtianyi published an article • 3 comments • 362 views • 2017-07-14 23:09 • from related topics


barrage


Danmaku protocols and open-platform APIs for various streaming platforms
github


Supported platforms



  • douyu.com

  • bilibili.com


Examples



  • douyu


package main

import (
	"fmt"
	"github.com/songtianyi/barrage/douyu"
	"github.com/songtianyi/rrframework/logs"
)

func chatmsg(msg *douyu.Message) {
	level := msg.GetStringField("level")
	nn := msg.GetStringField("nn")
	txt := msg.GetStringField("txt")
	logs.Info(fmt.Sprintf("level(%s) - %s >>> %s", level, nn, txt))
}

func main() {
	client, err := douyu.Connect("openbarrage.douyutv.com:8601", nil)
	if err != nil {
		logs.Error(err)
		return
	}

	client.HandlerRegister.Add("chatmsg", douyu.Handler(chatmsg), "chatmsg")
	if err := client.JoinRoom(288016); err != nil {
		logs.Error(fmt.Sprintf("Join room fail, %s", err.Error()))
		return
	}
	client.Serve()
}

demo


douyu-barrage-demo

SpaceVim - the best IDE for the terminal

SpaceVim published an article • 5 comments • 408 views • 2017-07-11 09:39 • from related topics


SpaceVim Chinese manual


(Badges: build status, version 0.2.0-dev, MIT license, docs, QQ, Gitter, Facebook, GitHub watchers/stars/forks, Twitter; screenshot omitted)


Project homepage: spacevim.org


GitHub repo: SpaceVim GitHub; stars and forks welcome.


SpaceVim is a community-driven, modular vim/neovim configuration collection. It contains multiple feature modules and is specifically optimized for neovim. Users simply pick the modules they need to assemble a development environment that suits them.


If you run into problems or have feature requests, please file an issue on GitHub, where it is more likely to be noticed and fixed. Vim/neovim users are also welcome to join our QQ group to discuss vim tips; click to join the Vim/SpaceVim user group.


Here is a summary of recent weeks of development:


(Image: throughput graph)


Contents
Installation
Updating
Features
User configuration


Installation


On Linux or Mac, installing SpaceVim is very simple; just run:


curl -sLf https://spacevim.org/install.sh | bash

For more customized installation options, see:


curl -sLf https://spacevim.org/install.sh | bash -s -- -h

SpaceVim is a modular configuration that runs on vim or neovim. For installing vim or neovim, see the following links:


Installing neovim


Building vim from source


Installation on Windows:


On Windows, vim users only need to clone this repository as vimfiles under the user's HOME directory. CMD opens in the HOME directory by default, so just run:


git clone https://github.com/SpaceVim/SpaceVim.git vimfiles

On Windows, neovim users need to clone this repository as AppData\Local\nvim under the HOME directory. For more about installing neovim, see the neovim wiki, which is very detailed. CMD opens in the HOME directory by default, so just run:


git clone https://github.com/SpaceVim/SpaceVim.git AppData\Local\nvim

Fonts


SpaceVim enables Powerline fonts by default; the default font is DejaVu Sans Mono. Windows users can simply download it and right-click to install.


vimproc.dll


On Windows, users who cannot conveniently compile vimproc can download the appropriate dll from the QQ group's files and place it in vimproc's lib directory, which defaults to ~/.cache/vimfiles/repos/github.com/Shougo/vimproc.vim/lib/


Features


An elegant interface


SpaceVim's default interface includes tagbar, vimfiler, and airline, with the gruvbox color scheme.




A Unite-centered workflow


The Unite shortcut prefix is f and can be changed via g:spacevim_unite_leader. There is no need to memorize shortcuts: SpaceVim has a good shortcut-hint mechanism. Here is the Unite keymap:




Autocompletion


SpaceVim uses deoplete, the fastest completion engine. Its main difference from YouCompleteMe is that it supports completion from multiple sources, not just semantic completion, and new completion sources are very easy to add.


Fine-grained tags management


User configuration


SpaceVim loads its configuration from ~/.SpaceVim.d/init.vim and from ./SpaceVim.d/init.vim in the current directory, updating the rtp accordingly; users can place their own scripts and SpaceVim configuration in the ~/.SpaceVim.d/ and .SpaceVim.d/ folders.


Example:


" Here are some basic customizations,
" please refer to the ~/.SpaceVim.d/init.vim
" file for all possible options:
let g:spacevim_default_indent = 3
let g:spacevim_max_column = 80

" Change the default directory where all miscellaneous persistent files go.
" By default it is ~/.cache/vimfiles/.
let g:spacevim_plugin_bundle_dir = '~/.cache/vimfiles/'

" set SpaceVim colorscheme
let g:spacevim_colorscheme = 'jellybeans'

" Set plugin manager, you want to use, default is dein.vim
let g:spacevim_plugin_manager = 'dein' " neobundle or dein or vim-plug

" use space as `<Leader>`
let mapleader = "\<space>"

" Set windows shortcut leader [Window], default is `s`
let g:spacevim_windows_leader = 's'

" Set unite work flow shortcut leader [Unite], default is `f`
let g:spacevim_unite_leader = 'f'

" By default, language specific plugins are not loaded. This can be changed
" with the following, then the plugins for go development will be loaded.
call SpaceVim#layers#load('lang#go')

" loaded ui layer
call SpaceVim#layers#load('ui')

" If there is a particular plugin you don't like, you can define this
" variable to disable them entirely:
let g:spacevim_disabled_plugins=[
\ ['junegunn/fzf.vim'],
\ ]

" If you want to add some custom plugins, use these options:
let g:spacevim_custom_plugins = [
\ ['plasticboy/vim-markdown', {'on_ft' : 'markdown'}],
\ ['wsdjeg/GitHub.vim'],
\ ]

" set the guifont
let g:spacevim_guifont = 'DejaVu\ Sans\ Mono\ for\ Powerline\ 11'

SpaceVim options


Option name                          Default value        Description
g:spacevim_default_indent            2                    indent width in spaces
g:spacevim_enable_guicolors          1                    enable/disable true color in the terminal
g:spacevim_windows_leader            s                    window-management shortcut prefix
g:spacevim_unite_leader              f                    Unite shortcut prefix
g:spacevim_plugin_bundle_dir         ~/.cache/vimfiles    default plugin cache location
g:spacevim_realtime_leader_guide     0                    enable/disable the real-time key guide
g:spacevim_guifont                   ''                   SpaceVim font
g:spacevim_sidebar_width             30                   sidebar width (file tree and tagbar)
g:spacevim_custom_plugins            []                   custom plugins

LiteIDE X32.1 released: a Go language IDE

visualfc published an article • 2 comments • 229 views • 2017-07-11 08:58 • from related topics

LiteIDE X32.1, the Go language development tool, has been officially released.


The new version fixes several bugs from X32 and optimizes the loading of environment settings. Per-project custom GOPATH settings now let subdirectories automatically inherit the parent directory's setup, the Gocode completion plugin also supports per-project custom GOPATH settings, and the Dlv debug plugin now runs in server mode (dlv headless mode), separating application output from debugger output.





Changelog 2017.7.7 Ver X32.1



  • LiteIDE

    • build config custom gopath support inherit parent path's gopath setup


  • GolangCode

    • update gocode lib-path by build config custom gopath


  • LiteEnv

    • optimize go environment check


  • LiteBuild

    • build config custom gopath inherit parent path

    • fix BuildAndRun kill old on window

    • fix build config custom gopath action


  • GolangPackage

    • fix load package treeview error


  • DlvDebugger

    • dlv use headless mode

    • fix dlv kill process


A few words about utility functions in Go

taowen published an article • 0 comments • 428 views • 2017-07-09 23:13 • from related topics


Goal: make Go support functions like the following


func Max(collection ...interface{}) interface{}

Problem: implemented with reflection, efficiency suffers.


Solution: json.Unmarshal is also implemented with reflection, yet jsoniter achieved a 6x speedup using unsafe.Pointer plus cached decoders. So let's apply the same technique and write a proof-of-concept prototype: https://github.com/v2pro/wombat


The resulting API looks like this:


import (
	"testing"
	"github.com/stretchr/testify/require"
	"github.com/v2pro/plz"
)

func Test_max_min(t *testing.T) {
	should := require.New(t)
	should.Equal(3, plz.Max(1, 3, 2))
	should.Equal(1, plz.Min(1, 3, 2))

	type User struct {
		Score int
	}
	should.Equal(User{3}, plz.Max(
		User{1}, User{3}, User{2},
		"Score"))
}

The underlying principle is to extract the unsafe.Pointer from an interface{} and then use an Accessor to read the concrete value. The Accessor concept corresponds to a type rather than a value, roughly the equivalent of type.GetIntValue(interface{}). Java's reflection API supports this, but Go provides no such API. With Accessors we can compute the whole task once and cache it, so the runtime cost is roughly that of a virtual method call.
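The pointer-extraction trick looks roughly like this. The eface struct mirrors the runtime's internal empty-interface layout, which is an unexported implementation detail, so take it as an illustration of the idea rather than a supported API:

package main

import (
	"fmt"
	"unsafe"
)

// eface mirrors the runtime representation of interface{}: one word for
// the type, one word for the data pointer.
type eface struct {
	rtype unsafe.Pointer
	data  unsafe.Pointer
}

// dataPtr pulls the data word out of an interface value.
func dataPtr(v interface{}) unsafe.Pointer {
	return (*eface)(unsafe.Pointer(&v)).data
}

func main() {
	x := 42
	p := dataPtr(&x)        // the interface holds *int; data is that pointer
	fmt.Println(*(*int)(p)) // 42
}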


The Accessor interface definition


type Accessor interface {
	// === static ===
	fmt.GoStringer
	Kind() Kind
	// map
	Key() Accessor
	// array/map
	Elem() Accessor
	// struct
	NumField() int
	Field(index int) StructField
	// array/struct
	RandomAccessible() bool
	New() (interface{}, Accessor)

	// === runtime ===
	IsNil(ptr unsafe.Pointer) bool
	// variant
	VariantElem(ptr unsafe.Pointer) (elem unsafe.Pointer, elemAccessor Accessor)
	InitVariant(ptr unsafe.Pointer, template Accessor) (elem unsafe.Pointer, elemAccessor Accessor)
	// map
	MapIndex(ptr unsafe.Pointer, key unsafe.Pointer) (elem unsafe.Pointer)   // only when random accessible
	SetMapIndex(ptr unsafe.Pointer, key unsafe.Pointer, elem unsafe.Pointer) // only when random accessible
	IterateMap(ptr unsafe.Pointer, cb func(key unsafe.Pointer, elem unsafe.Pointer) bool)
	FillMap(ptr unsafe.Pointer, cb func(filler MapFiller))
	// array/struct
	ArrayIndex(ptr unsafe.Pointer, index int) (elem unsafe.Pointer) // only when random accessible
	IterateArray(ptr unsafe.Pointer, cb func(index int, elem unsafe.Pointer) bool)
	FillArray(ptr unsafe.Pointer, cb func(filler ArrayFiller))
	// primitives
	Skip(ptr unsafe.Pointer) // when the value is not needed
	String(ptr unsafe.Pointer) string
	SetString(ptr unsafe.Pointer, val string)
	Bool(ptr unsafe.Pointer) bool
	SetBool(ptr unsafe.Pointer, val bool)
	Int(ptr unsafe.Pointer) int
	SetInt(ptr unsafe.Pointer, val int)
	Int8(ptr unsafe.Pointer) int8
	SetInt8(ptr unsafe.Pointer, val int8)
	Int16(ptr unsafe.Pointer) int16
	SetInt16(ptr unsafe.Pointer, val int16)
	Int32(ptr unsafe.Pointer) int32
	SetInt32(ptr unsafe.Pointer, val int32)
	Int64(ptr unsafe.Pointer) int64
	SetInt64(ptr unsafe.Pointer, val int64)
	Uint(ptr unsafe.Pointer) uint
	SetUint(ptr unsafe.Pointer, val uint)
	Uint8(ptr unsafe.Pointer) uint8
	SetUint8(ptr unsafe.Pointer, val uint8)
	Uint16(ptr unsafe.Pointer) uint16
	SetUint16(ptr unsafe.Pointer, val uint16)
	Uint32(ptr unsafe.Pointer) uint32
	SetUint32(ptr unsafe.Pointer, val uint32)
	Uint64(ptr unsafe.Pointer) uint64
	SetUint64(ptr unsafe.Pointer, val uint64)
	Float32(ptr unsafe.Pointer) float32
	SetFloat32(ptr unsafe.Pointer, val float32)
	Float64(ptr unsafe.Pointer) float64
	SetFloat64(ptr unsafe.Pointer, val float64)
}

This Accessor enables a lot: besides the usual functional-programming utilities (map/filter/sorted/...), it can also power a plz.Copy function


func Copy(dst, src interface{}) error

Copy works for all kinds of object-binding scenarios:



  • Value copying between different Go types (struct <-> map conversion, pointer-aware)

  • JSON encoding and decoding

  • Copying an http.Request onto my struct

  • Copying sql rows onto my struct

  • Encoding/decoding other protocols such as MySQL/thrift/redis


It can also implement a plz.Validate function


func Validate(obj interface{}) error

And possibly, we could even borrow the LINQ concept from .NET:


func Query(obj interface{}, query string) (result interface{}, err error)

Of course, this is a huge amount of work, far more tedious than a JSON parsing library. Only a few proof-of-concept prototypes exist so far:



Interested readers are welcome to file issues: https://github.com/v2pro/wombat/issues

A Go SDK for Baidu AI services

chenqinghe published an article • 0 comments • 399 views • 2017-07-07 19:07 • from related topics

A Go SDK built on Baidu's REST APIs. It currently supports speech synthesis and speech recognition, with more features to come. Project address: https://github....

The Go framework Gin finally releases v1.2

appleboy published an article • 7 comments • 632 views • 2017-07-06 16:02 • from related topics

Reposted from my own post "The Go framework Gin finally releases v1.2".




Last week I discussed releasing a new version with Gin's author @javierprovecho, and within a day or two he finished putting v1.2 together. Besides the new release, the project also got a colored logo, which I honestly think looks great. Let's look at what v1.2 ships and fixes.


How to upgrade


First, how to upgrade. Developers who haven't adopted a vendor tool yet should do so now. You can upgrade the Gin framework via govendor as follows:


$ govendor fetch github.com/gin-gonic/gin@v1.2
$ govendor fetch github.com/gin-gonic/gin/render

Since we added Template Func Maps, the render package must be upgraded as well.


Switching from godeps to govendor


The Gin project originally used godeps, but it had some package-handling problems, so we decided to switch to the more stable govendor; we'll see whether dep, under development by the Go team, can eventually replace govendor completely.


Let's Encrypt support


I started a separate project, autotls, to give Gin Let's Encrypt support. It builds on the net/http package, so it works with basically every framework, unless your HTTP server isn't built on net/http. Usage is simple, as shown below:


Enable TLS for your web app with one line


package main

import (
	"log"

	"github.com/gin-gonic/autotls"
	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// Ping handler
	r.GET("/ping", func(c *gin.Context) {
		c.String(200, "pong")
	})

	log.Fatal(autotls.Run(r, "example1.com", "example2.com"))
}

Customizing the Auto TLS Manager


Developers can store certificates in a different directory by changing /var/www/.cache:


package main

import (
	"log"

	"github.com/gin-gonic/autotls"
	"github.com/gin-gonic/gin"
	"golang.org/x/crypto/acme/autocert"
)

func main() {
	r := gin.Default()

	// Ping handler
	r.GET("/ping", func(c *gin.Context) {
		c.String(200, "pong")
	})

	m := autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example1.com", "example2.com"),
		Cache:      autocert.DirCache("/var/www/.cache"),
	}

	log.Fatal(autotls.RunWithManager(r, &m))
}

Template Func support


First, developers can now adjust the template delimiters. The default is {{}}; custom delimiters can be configured through Gin:


r := gin.Default()
r.Delims("{[{", "}]}")
r.LoadHTMLGlob("/path/to/templates")

Custom Template Funcs are also supported:


...

func formatAsDate(t time.Time) string {
	year, month, day := t.Date()
	return fmt.Sprintf("%d/%02d/%02d", year, month, day)
}

...

router.SetFuncMap(template.FuncMap{
	"formatAsDate": formatAsDate,
})

...

router.GET("/raw", func(c *Context) {
	c.HTML(http.StatusOK, "raw.tmpl", map[string]interface{}{
		"now": time.Date(2017, 07, 01, 0, 0, 0, 0, time.UTC),
	})
})

...

Open raw.tmpl and write:


Date: {[{.now | formatAsDate}]}

Output:


Date: 2017/07/01

New Context functions


Before this release, the most frustrating thing was Bind validation of request forms or JSON: Gin would directly return 400 Bad Request, while many developers wanted to customize the error message. So in v1.2 we moved BindWith into the deprecated file and plan to remove it officially in the next release.


// BindWith binds the passed struct pointer using the specified binding engine.
// See the binding package.
func (c *Context) BindWith(obj interface{}, b binding.Binding) error {
	log.Println(`BindWith(\"interface{}, binding.Binding\") error is going to
be deprecated, please check issue #662 and either use MustBindWith() if you
want HTTP 400 to be automatically returned if any error occur, of use
ShouldBindWith() if you need to manage the error.`)
	return c.MustBindWith(obj, b)
}

If you want custom error messages, use ShouldBindWith:


package main

import (
	"github.com/gin-gonic/gin"
)

type LoginForm struct {
	User     string `form:"user" binding:"required"`
	Password string `form:"password" binding:"required"`
}

func main() {
	router := gin.Default()
	router.POST("/login", func(c *gin.Context) {
		// you can bind multipart form with explicit binding declaration:
		// c.MustBindWith(&form, binding.Form)
		// or you can simply use autobinding with Bind method:
		var form LoginForm
		// in this case proper binding will be automatically selected
		if c.ShouldBindWith(&form) == nil {
			if form.User == "user" && form.Password == "password" {
				c.JSON(200, gin.H{"status": "you are logged in"})
			} else {
				c.JSON(401, gin.H{"status": "unauthorized"})
			}
		}
	})
	router.Run(":8080")
}

Those are the major changes; for the remaining small features and fixes, see the v1.2 release log.

Go connection pools

chrislee published an article • 4 comments • 422 views • 2017-07-04 11:21 • from related topics


I recently wrote two connection pools in Go: goRpcPool and goRedisPool.
As a programmer with several years of C++ experience, switching to Go has boosted my productivity enormously. My former employer (an IaaS provider) has also started moving from C++ to Go, and Go job postings (from BAT as well as startups) keep increasing. I hope things keep getting better.

RobotGo v0.45.0 released, adding process management and clipboard support

Reply

veni asked a question • 1 follower • 0 replies • 431 views • 2017-07-02 22:40 • from related topics

Tips and tricks for using JSON in Golang

taowen published an article • 3 comments • 1414 views • 2017-06-20 23:32 • from related topics


Sometimes an upstream service sends a field as a string, but we want to use it as a number.
A single json:",string" tag handles this; without knowing these little Golang tricks, you'd have to go to a lot of trouble.


Reference article: http://attilaolah.eu/2014/09/10/json-and-struct-composition-in-go/


Temporarily ignoring a struct field


type User struct {
	Email    string `json:"email"`
	Password string `json:"password"`
	// many more fields…
}

Temporarily ignore the Password field:


json.Marshal(struct {
	*User
	Password bool `json:"password,omitempty"`
}{
	User: user,
})

Temporarily adding extra fields


type User struct {
	Email    string `json:"email"`
	Password string `json:"password"`
	// many more fields…
}

Temporarily ignore the Password field and add a Token field:


json.Marshal(struct {
	*User
	Token    string `json:"token"`
	Password bool   `json:"password,omitempty"`
}{
	User:  user,
	Token: token,
})

Temporarily gluing two structs together


type BlogPost struct {
	URL   string `json:"url"`
	Title string `json:"title"`
}

type Analytics struct {
	Visitors  int `json:"visitors"`
	PageViews int `json:"page_views"`
}

json.Marshal(struct {
	*BlogPost
	*Analytics
}{post, analytics})

Splitting one JSON document into two structs


json.Unmarshal([]byte(`{
	"url": "attila@attilaolah.eu",
	"title": "Attila's Blog",
	"visitors": 6,
	"page_views": 14
}`), &struct {
	*BlogPost
	*Analytics
}{&post, &analytics})

Temporarily renaming struct fields


type CacheItem struct {
	Key    string `json:"key"`
	MaxAge int    `json:"cacheAge"`
	Value  Value  `json:"cacheValue"`
}

json.Marshal(struct {
	*CacheItem

	// Omit bad keys
	OmitMaxAge omit `json:"cacheAge,omitempty"`
	OmitValue  omit `json:"cacheValue,omitempty"`

	// Add nice keys
	MaxAge int    `json:"max_age"`
	Value  *Value `json:"value"`
}{
	CacheItem: item,

	// Set the int by value:
	MaxAge: item.MaxAge,

	// Set the nested struct by reference, avoid making a copy:
	Value: &item.Value,
})

Passing numbers as strings


type TestObject struct {
	Field1 int `json:",string"`
}

This corresponds to the JSON {"Field1": "100"}.


If the JSON is {"Field1": 100}, it fails with an error.


Tolerating string/number conversions


If you use jsoniter, you can enable fuzzy mode to accept JSON sent by PHP:


import "github.com/json-iterator/go/extra"

extra.RegisterFuzzyDecoders()

This handles mismatched string and number types. For example:


var val string
jsoniter.UnmarshalFromString(`100`, &val)

Or:


var val float32
jsoniter.UnmarshalFromString(`"1.23"`, &val)

Tolerating empty arrays as objects


Another maddening PHP behavior: an empty PHP array serializes to [], while a non-empty one serializes to {"key":"value"}.
We need to treat [] as {}.


If you use jsoniter, enable fuzzy mode to accept JSON sent by PHP:


import "github.com/json-iterator/go/extra"

extra.RegisterFuzzyDecoders()

Then this works:


var val map[string]interface{}
jsoniter.UnmarshalFromString(`[]`, &val)

Supporting time.Time with MarshalJSON


By default, golang serializes time.Time as a string. To represent time.Time in another format, define a custom type and implement MarshalJSON.


type timeImplementedMarshaler time.Time

func (obj timeImplementedMarshaler) MarshalJSON() ([]byte, error) {
	seconds := time.Time(obj).Unix()
	return []byte(strconv.FormatInt(seconds, 10)), nil
}

MarshalJSON gets called during serialization:


type TestObject struct {
	Field timeImplementedMarshaler
}

should := require.New(t)
val := timeImplementedMarshaler(time.Unix(123, 0))
obj := TestObject{val}
bytes, err := jsoniter.Marshal(obj)
should.Nil(err)
should.Equal(`{"Field":123}`, string(bytes))

Supporting time.Time with RegisterTypeEncoder


jsoniter can customize the JSON encoding of types you didn't define. For example, time.Time can be serialized as an epoch int64:


import "github.com/json-iterator/go/extra"

extra.RegisterTimeAsInt64Codec(time.Microsecond)
output, err := jsoniter.Marshal(time.Unix(1, 1002))
should.Equal("1000001", string(output))

To customize further, see the implementation of RegisterTimeAsInt64Codec.


Supporting non-string map keys with MarshalText


Although the JSON standard only supports strings as map keys, golang's MarshalText() interface lets other types act as map keys as well. For example:


f, _, _ := big.ParseFloat("1", 10, 64, big.ToZero)
val := map[*big.Float]string{f: "2"}
str, err := MarshalToString(val)
should.Equal(`{"1":"2"}`, str)

Here, big.Float implements MarshalText().


Using json.RawMessage


If part of the JSON document has no fixed schema, we can keep the raw text around.


type TestObject struct {
	Field1 string
	Field2 json.RawMessage
}

var data TestObject
json.Unmarshal([]byte(`{"field1": "hello", "field2": [1,2,3]}`), &data)
should.Equal(` [1,2,3]`, string(data.Field2))

Using json.Number


By default, a number decoded into interface{} becomes a float64. If the input number is large, this representation loses precision. Call UseNumber() to enable json.Number, which represents numbers as strings.


decoder1 := json.NewDecoder(bytes.NewBufferString(`123`))
decoder1.UseNumber()
var obj1 interface{}
decoder1.Decode(&obj1)
should.Equal(json.Number("123"), obj1)

jsoniter supports this standard-library usage, and additionally extends the behavior so that Unmarshal supports UseNumber too.


json := Config{UseNumber:true}.Froze()
var obj interface{}
json.UnmarshalFromString("123", &obj)
should.Equal(json.Number("123"), obj)

Changing the field naming style globally


JSON field names and Go field names often differ. We can use field tags to change them:


output, err := jsoniter.Marshal(struct {
	UserName      string `json:"user_name"`
	FirstLanguage string `json:"first_language"`
}{
	UserName:      "taowen",
	FirstLanguage: "Chinese",
})
should.Equal(`{"user_name":"taowen","first_language":"Chinese"}`, string(output))

But setting each field individually is tedious. With jsoniter, we can set the naming strategy globally:


import "github.com/json-iterator/go/extra"

extra.SetNamingStrategy(LowerCaseWithUnderscores)
output, err := jsoniter.Marshal(struct {
	UserName      string
	FirstLanguage string
}{
	UserName:      "taowen",
	FirstLanguage: "Chinese",
})
should.Nil(err)
should.Equal(`{"user_name":"taowen","first_language":"Chinese"}`, string(output))

Using private fields


Go's standard library only supports public fields. jsoniter additionally supports private fields; enable it with SupportPrivateFields():


import "github.com/json-iterator/go/extra"

extra.SupportPrivateFields()

type TestObject struct {
	field1 string
}

obj := TestObject{}
jsoniter.UnmarshalFromString(`{"field1":"Hello"}`, &obj)
should.Equal("Hello", obj.field1)

Writing promotional posts isn't easy; give it a star: https://github.com/json-iterator/go

wechat_pusher: a scheduled WeChat message push framework built with Golang

HundredLee published an article • 2 comments • 439 views • 2017-06-14 12:55 • from related topics

wechat_pusher


Github



  • https://github.com/hundredlee/wechat_pusher

  • Stars, forks, and watches welcome

  • I haven't been learning Golang for long and wrote this open-source project for practice. Suggestions welcome. Thanks.

    Feature list

  • Message push

    • Template message push

      • model -> message.go

      • task -> template_task.go

    • Image push (TODO)

    • Text push (TODO)

    • Rich-media push (TODO)

  • Log storage

  • Scheduled tasks


Getting started


Step one: go get, of course




  • go get github.com/hundredlee/wechat_pusher.git



  • The project layout:


├── README.md
├── config
│   └── config.go
├── config.conf
├── config.conf.example
├── enum
│   └── task_type.go
├── glide.lock
├── glide.yaml
├── hlog
│   ├── filelog.go
│   ├── filelog_test.go
│   └── hlog.go
├── main.go
├── main.go.example
├── models
│   ├── message.go
│   └── token.go
├── redis
│   ├── redis.go
│   └── redis_test.go
├── statics
│   └── global.go
├── task
│   ├── task.go
│   └── template_task.go
├── utils
│   ├── access_token.go
│   ├── crontab.go
│   └── push.go
└── vendor
└── github.com

Step two: create a project


Create the configuration file



  • The project root contains a config.conf.example; rename it to config.conf

  • Its contents:


[WeChat]
APPID=
SECRET=
TOKEN=

[Redis]
POOL_SIZE=
TIMEOUT=
HOST=
PASS=
DB=

[Log]
LOG_PATH=


  • WeChat section

    • APPID && SECRET && TOKEN: things every WeChat developer already knows; not covered here

  • Redis section

    • POOL_SIZE: connection pool size, int

    • TIMEOUT: connection timeout, int

    • HOST: IP to connect to, string

    • PASS: password, string

    • DB: database selection, int

  • Log section

    • LOG_PATH: log directory; for example, with the value wechat_log the full path is GOPATH/wechat_log


  • Read it in code like this:



conf := config.Instance()
// e.g. the WeChat appid
appId := conf.ConMap["WeChat.APPID"]

Configuring a template



  • Take template messages as an example:

  • message.go is the structure of a template message

  • template_task.go wraps a template message as a task (template_task.go implements the task.go interface)


mess := models.Message{
	ToUser:     "openid",
	TemplateId: "templateid",
	Url:        "url",
	Data: models.Data{
		First:   models.Raw{"xxx", "#173177"},
		Subject: models.Raw{"xxx", "#173177"},
		Sender:  models.Raw{"xxx", "#173177"},
		Remark:  models.Raw{"xxx", "#173177"}}}

// Wrap it as a task; TemplateTask represents a template-message task
task := task.TemplateTask{}
task.SetTask(mess)


  • The code above configures a template message; WeChat developers will find it familiar.


Creating a task



  • For example, to create a scheduled template-message push task:

    • Step one: wrap the task

    • Step two: add the task and set its type, concurrency, retry count, and so on

    • Step three: start the task

  • The complete process is demonstrated below:


package main

import (
	"github.com/hundredlee/wechat_pusher/enum"
	"github.com/hundredlee/wechat_pusher/models"
	"github.com/hundredlee/wechat_pusher/task"
	"github.com/hundredlee/wechat_pusher/utils"
	"runtime"
)

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	var tasks []task.Task
	tasks = make([]task.Task, 100)
	mess := models.Message{
		ToUser:     "oBv9cuLU5zyI27CtzI4VhV6Xabms",
		TemplateId: "UXb6s5dahNC5Zt-xQIxbLJG1BdP8mP73LGLhNXl68J8",
		Url:        "http://baidu.com",
		Data: models.Data{
			First:   models.Raw{"xxx", "#173177"},
			Subject: models.Raw{"xxx", "#173177"},
			Sender:  models.Raw{"xxx", "#173177"},
			Remark:  models.Raw{"xxx", "#173177"}}}
	task := task.TemplateTask{}
	task.SetTask(mess)

	for i := 0; i < 100; i++ {
		tasks[i] = &task
	}

	utils.NewPush(tasks).SetTaskType(enum.TASK_TYPE_TEMPLATE).SetRetries(4).SetBufferNum(10).Add("45 * * * * *")
	utils.StartCron()
}


Run



  • Very simple: once all the tasks are assembled, the two lines below run everything.


  • utils.NewPush(tasks).SetTaskType(enum.TASK_TYPE_TEMPLATE).SetRetries(4).SetBufferNum(10).Add("45 * * * * *")



  • utils.StartCron()


Contributor


LiteIDE X32 released: a Go language IDE

visualfc published an article • 2 comments • 391 views • 2017-06-13 11:44 • from related topics

LiteIDE X32, the Go language development tool, has been officially released.


After three months and more than 200 source commits, the new LiteIDE release is finally complete, and with HopeHook's help the liteide.org website has officially launched.


LiteIDE X32 brings major improvements to UI sessions, the build system, source editing, and code analysis, and the MultiFolderModel rewrite begun last year has finally been merged into LiteIDE's folders window.



  • More UI themes and editor color schemes, thanks to HopeHook

  • Support for loading external icon libraries

  • Session switching (each session keeps its own folders and files)

  • Build folders support a custom GOPATH

  • Build folders support more settings

  • The debug plugin and the Go editor plugin honor the build folder's BUILDFLAGS -tags setting

  • Improved Go code navigation and refactoring

  • See the history for more updates and bug fixes




Changelog 2017.6.12 Ver X32



  • LiteIDE

    • support folder build config custom GOPATH

    • support folder build config BUILDFLAGS -tags setup

    • support folder build config TARGETBASENAME setup

    • support session switching for folder/editor

    • support load custom icon library from liteapp/qrc folder (default and folder)

    • reimplemented multifolder model, it took me a long time :)

    • add macOS session menu for native dock menu

    • recent menu sync for multi windows

    • gotools support +build source navigate (single file or -tags setup)


  • LiteApp

    • add the session switching function

    • add autosavedocument emit message option

    • add max editor tab count option

    • add option action to standard toolbar

    • add tool window use shortcuts option for unstandard keyboard option

    • add exit liteide ctrl+q on windows

    • add themes (carbon.qss gray.qss sublime.qss) for liteide & beautify old themes, thanks for hope hook

    • editor tab context add open terminal here action

    • folders context menu add open in new windows action (new folder session)

    • folder view add show showdetails action

    • fix folder sync editor incorrect on macOS

    • fix webview and debug console qss

    • fix folders tool window enter key to jump

    • fix exit error save session by ctrl+q on macos

    • fix newfile dialog space name

    • update folder tool window showInExporer showInShell action text


  • LiteFind

    • find files add auto swith current folder checkbox

    • find in editor add show replace mode checkbox

    • filesearch enable replace whitespace or empty

    • editor replace all in one edit block for ctrl+z once undo


  • LiteBuild

    • add custom GOPATH in build config for build/debug/GolangEdit

    • add custom share-value BUILDFLAGS in build config for build/debug/GolangEdit

    • add custom TARGETBASENAME in build config for build/debug

    • support BUILDFLAGS -tags for build/debug/GolangEdit

    • update gosrc.xml to export custom value and share-value

    • folders tool window context menu add Go build configuration action

    • folders tool window context go tool use Go build configuration setup

    • fix stop action for kill process


  • LiteDebug

    • console use editor color scheme

    • support LiteBuild folder build config BUILDFLAGS/BUILDARGS -tags flag setup


  • DlvDebugger

    • fix process identify for auto exit


  • LiteEnv

    • default env /usr/local/go on macosx

    • update macosx cross env GOROOT for system


  • LiteEditor

    • context menu add convert case menu

    • go.snippet add iferr

    • update sublime.xml / sublime-bold.xml, thanks to hopehook <hopehook@qq.com>

    • alt+backspace delete serial whitespaces

    • option font QComboBox to QFontComboBox, add restore DefaultFont action

    • option add show monospace font check

    • option file types sort mimetype, show custom extsition first


  • GolangPackage

    • gopath setup add use sysgopath/litegopath check


  • GolangPlay

    • fix goplay use goenvironment


  • GolangDoc

    • change golang api index search for go/api folder


  • GolangEdit

    • add go root source readonly setup option

    • support folder go build config BUILDFLAGS/BUILDARGS -tags flag setup

    • fix interface type by gotools

    • fix find process stop and run

    • fix lookup guru for source query


  • GolangAst

    • fix astview enter key to jump


  • FileBorwser

    • fix file system enter key to jump


  • gotools

    • fix types interface method

    • types support +build for single source

    • types support -tags flag


  • tools

    • add new exportqrc tool for export liteide all build-in images