I participated in ICFP Contest 2021 with my regular teammates. Our team name was “manarimo”, which we’ve been using for the past several years.
- Repository: https://github.com/manarimo/ICFPC2021/
- Portal website: http://icfpc2021-manarimo.s3-website-us-east-1.amazonaws.com/best.html
- Manual solver: http://icfpc2021-manarimo.s3-website-us-east-1.amazonaws.com/kenkoooo/#/problem/1
We were in second place when the scoreboard was frozen. I’m happy with this result, as we are #1 among Japanese teams. I was thinking we could take the global #1, but RGBTeam did a better job than us. Some other teams may still be submarining on the board, but more or less, invisible scores don’t mean anything until the final result is announced…
Summary & Approach
This year’s problem featured a game named “Brain Wall” (I hadn’t heard of this game before, though it appears to be a segment of a famous Japanese TV show). In the problem, you are given a mesh that represents a character who has to go through a hole in a wall. The hole is represented as a polygon. You are asked to embed the mesh into the hole, that is, to move the vertices of the mesh without changing the lengths of the edges too much. The coordinates of the vertices must be integers. The score, called “dislikes”, is the sum over the points of the hole of the squared distance to the nearest vertex of the mesh, and the goal is to minimize this sum.
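To make the scoring concrete, here is a minimal Ruby sketch of the dislikes computation and the edge-length constraint as I understood the spec (`hole` and `pose` are arrays of `[x, y]` integer pairs; `epsilon` is the per-problem tolerance):

```ruby
# Squared Euclidean distance between two integer points [x, y].
def sqdist(p, q)
  (p[0] - q[0])**2 + (p[1] - q[1])**2
end

# "Dislikes": for each point of the hole, the squared distance to the
# nearest vertex of the mesh, summed over all hole points.
def dislikes(hole, pose)
  hole.sum { |h| pose.map { |v| sqdist(h, v) }.min }
end

# An edge's squared length may deviate from the original by at most
# epsilon / 1_000_000 in relative terms (as I recall the rule).
def edge_ok?(orig_sqlen, new_sqlen, epsilon)
  (new_sqlen - orig_sqlen).abs * 1_000_000 <= epsilon * orig_sqlen
end
```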
It immediately reminded us of the 2016 contest, which featured Origami, and we concluded that manual solving would be key. Given that, the first thing we decided to build was a visualizer that supported manual play. Meanwhile, other members worked on a general solver using simulated annealing and on specialized solvers for specific patterns. Twenty-four hours after the contest had begun, an additional rule was announced: by placing a vertex of the mesh at a specific coordinate inside the polygon, you can obtain a bonus that relaxes the constraints in another specified level. For example, you may place a vertex outside the polygon, or freely stretch or contract an edge, ignoring the constraint on its length. We basically relied on the manual approach to take the bonuses into account. A tricky part, however, was that you sometimes had to give up an optimal solution in order to obtain a bonus for another level, so optimality had to be considered across all levels. Since this really looked like an optimization problem of its own, we decided to make solutions both with and without bonuses for each level, then choose the optimal set at submission time, taking into account whether each bonus was actually collected in the level that grants it. In the end, we used a linear programming library to compute the optimal set of solutions.
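For the curious, the selection can be written as a small integer program; this is my notation for illustration, not necessarily our exact formulation. Let a 0/1 variable x_{p,s} mean that solution s (from the candidate set S_p) is submitted for problem p, with score c_{p,s}:

```latex
\max \sum_{p}\sum_{s \in S_p} c_{p,s}\,x_{p,s}
\quad \text{s.t.} \quad
\sum_{s \in S_p} x_{p,s} = 1 \;\;\forall p,
\qquad
x_{p,s} \le \sum_{\substack{t \in S_q \\ t \text{ collects } \beta}} x_{q,t}
\quad \text{for each } (p,s) \text{ spending a bonus } \beta \text{ granted by problem } q.
```

In words: a solution that spends a bonus may be chosen only if some chosen solution for the granting level actually collects that bonus.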
We created the following tools during the contest (roughly in the order of creation):
- bruteforce: enumerates every possible arrangement of vertices, then outputs the optimal valid one (@pepsin_amylase, @y3eadgbe)
- checker: checks the validity of solutions (@osa_k, @pepsin_amylase)
- gen_web: generates HTML files to list problems and solutions (@osa_k)
- visualizer: a visualizer on which you can manually move vertices (@kenkoooo & keita)
- simulated_annealing: the neighbors are: moving a vertex by 1, moving an edge by 1, moving the whole mesh by 1, moving a degree-1 vertex to the symmetric coordinate, moving a degree-2 vertex to its reflected coordinate, and warping a vertex onto a point of the polygon. The penalties are: the distance between each out-of-bounds vertex and the polygon, the number of out-of-bounds edges, and the deviation from the length constraints. A generic skeleton of the annealing loop is sketched just after this list (@kawatea, @y3eadgbe, @osa_k)
- contour fitter: as we noticed that some problems appeared to be generated by shuffling the vertices of an exact-fit solution (e.g. 64), this tool compares the edge lengths of the mesh with those of the polygon, then outputs a sequence of vertices that continuously fits the contour of the polygon (@yuusti)
- manten: for problems known from the scoreboard to have a solution with dislikes = 0, tries every permutation of vertices to match them up with the points of the polygon (@pepsin_amylase)
- package_solutions: submits the highest-scoring solution for each level. After the bonuses were introduced, it also solves a linear programming problem to determine in which levels the bonuses should be collected (@pepsin_amylase)
- globalist-optimize: uses hill climbing to optimize the edge costs when the GLOBALIST bonus is applicable (@kenkoooo)
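As referenced in the simulated_annealing item above, the annealing loop itself is generic; here is a minimal Ruby skeleton, assuming `random_neighbor` applies one of the listed moves and `penalty` combines dislikes with the constraint-violation terms (both are placeholders, not our actual function names):

```ruby
# Generic simulated-annealing loop (sketch). A zero-penalty state is a
# feasible solution; lower penalty is better.
def anneal(initial, t_start:, t_end:, time_limit:)
  state = best = initial
  start = Time.now
  while (elapsed = Time.now - start) < time_limit
    # Geometric cooling from t_start down to t_end over the time budget.
    temp = t_start * (t_end / t_start)**(elapsed / time_limit)
    cand = random_neighbor(state)
    diff = penalty(cand) - penalty(state)
    # Always accept improvements; accept worsenings with Boltzmann probability.
    state = cand if diff <= 0 || rand < Math.exp(-diff / temp)
    best = state if penalty(state) < penalty(best)
  end
  best
end
```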
Also, we had four enthusiastic manual solvers: mkut, @y3eadgbe, @kawatea, @yuusti. It was quite exciting to see them solving complex problems such as 64 or 105 without the help of automation tools!
What I Did
I mainly worked on managing the portal site, manually creating solutions, optimizing the simulated annealing, and running SA in parallel.
Portal Site
We usually build a web application for the portal site; however, it’s kind of tedious because we run into trouble with it every single year - like I/O issues, or accidentally introducing weird bugs that break the website at the worst moment. So I decided to use only static assets this year. Originally I intended to host them on GitHub Pages, but as it turned out that it isn’t supported for private repositories on the free tier, I decided to upload them to S3 and use its static website hosting feature. The problems and solutions were committed directly into the main repository. The site was built and uploaded by a GitHub Actions workflow triggered on push.
I feel it was a good decision. Since our portal site merely aggregates data for statistics and formats it neatly, there’s no good reason to bother spinning up a server and running the computation on every request. I was planning to switch the portal to a React app that fetches preprocessed JSON from S3 if the amount of data ever grew past the point where this approach was feasible, but we never actually reached that point. It was also quite handy to have all problems and solutions in the local copy of the repo while writing the submission script or feeding existing solutions to the automatic solvers as input.
The script to generate the website was written in Ruby. I was planning to rewrite it in a statically typed language such as Rust or TypeScript before it grew too large, but I ended up just extending the original Ruby script. That became tiresome in the end, because I had somehow embedded HTML fragments directly as string literals instead of using ERB or the like…
Additionally, I hit an issue where all files were uploaded to S3 on every run of GitHub Actions: regenerating the static files refreshes their timestamps even when the content is unchanged, and unlike rsync, S3 doesn’t compare files by their hashes… As a workaround, I passed the --size-only flag to the s3-sync action so that a file is uploaded only when its size has changed. This worked well at first, but as we started to see frequent cases where the file size stayed the same while the numbers in the page had changed, I ended up dropping the flag and uploading everything. I probably could have managed the generated files in a separate repo instead.
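For reference, the action is essentially a wrapper around the AWS CLI; here is a Ruby stand-in for what it runs (the bucket name is hypothetical):

```ruby
# --size-only makes `aws s3 sync` skip files whose byte size is unchanged,
# even if their timestamps differ. The bucket name below is made up.
system("aws", "s3", "sync", "./dist", "s3://portal-bucket", "--size-only")
```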
Solution Making
At the beginning, I solved levels that looked interesting on our visualizer to build a sense of this year’s problem. Then I realized that the simulated annealing solver would play a crucial role for levels whose optimal solution is nontrivial, so I requested a feature to load a manual solution as the initial state of the SA, and ran the program intensively while tuning the initial temperature and time limit. I also pointed out several bottlenecks by looking at the output of gprof.
To run SA in parallel, I launched a c5.24xlarge instance on EC2 and ran a Ruby script that kept as many background jobs going as there are CPU cores. I didn’t use any particular framework to manage task submission; the Ruby script just constructed the command lines, launched them all, and waited until every job finished. Looking back, I feel I should have used a job queue to simplify the management. We usually use Jenkins as a drop-in job queue in such cases; however, I was reluctant to use it this year because I was worried about hitting problems like managing the tasks in the queue or collecting the output files - the kind of problems we run into in some form every single year.
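The runner was roughly this shape (a sketch with hypothetical command lines and paths, not our exact script):

```ruby
require 'etc'

# Hypothetical command lines: one SA run per problem.
queue = Queue.new
(1..100).each { |id| queue << "./sa problems/#{id}.json > out/#{id}.json" }

# One worker thread per CPU core; each pulls commands until the queue drains.
workers = Etc.nprocessors.times.map do
  Thread.new do
    loop do
      cmd = queue.pop(true) rescue break  # non-blocking pop; break when empty
      system(cmd)
    end
  end
end
workers.each(&:join)
```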
Thoughts
This year’s problem was of quite high quality, and I really enjoyed it. I feel the problem was tuned up well by collecting the good elements of recent contests. It was clearly based on the Origami problem from 2016, but the enhancements pushed the quality above it. For instance, restricting the coordinates to integers makes the problem easier to deal with, and with bonuses in the spirit of 2017 (Punter) and 2019 (Cleaning Robot), I never felt bored. I also want to mention the hilarious animation in the submission result pages:
The levels were nicely complicated but not overly so, and the bonuses were quite well designed: helpful with careful thought, but never too powerful. I was also guessing that the participants would be asked to submit levels of their own at some point, as in 2016, but that never happened. Actually, I feel it was a great decision by the organizers, since such open-ended levels would have spoiled the carefully designed balance of the levels they prepared.